Test Report: Docker_Linux_crio_arm64 21772

32e66bacf90aad56df50495b30e504a3036ca148:2025-10-26:42070

Failed tests (39/326)

Order  Test name  Duration (s)
29 TestAddons/serial/Volcano 0.76
35 TestAddons/parallel/Registry 18.95
36 TestAddons/parallel/RegistryCreds 0.54
37 TestAddons/parallel/Ingress 143.92
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.35
41 TestAddons/parallel/CSI 48.41
42 TestAddons/parallel/Headlamp 3.17
43 TestAddons/parallel/CloudSpanner 6.28
44 TestAddons/parallel/LocalPath 8.49
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 6.28
97 TestFunctional/parallel/ServiceCmdConnect 603.67
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.93
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
135 TestFunctional/parallel/ServiceCmd/Format 0.41
136 TestFunctional/parallel/ServiceCmd/URL 0.44
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.27
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 426.98
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.41
190 TestJSONOutput/pause/Command 1.76
196 TestJSONOutput/unpause/Command 2.18
255 TestKubernetesUpgrade 551.52
280 TestPause/serial/Pause 6.86
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.53
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.61
309 TestStartStop/group/old-k8s-version/serial/Pause 6.42
315 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.22
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.61
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.99
331 TestStartStop/group/embed-certs/serial/Pause 6.23
337 TestStartStop/group/no-preload/serial/Pause 7.68
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.07
347 TestStartStop/group/newest-cni/serial/Pause 6.55

TestAddons/serial/Volcano (0.76s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable volcano --alsologtostderr -v=1: exit status 11 (757.269189ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:16:12.721716  302188 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:16:12.723404  302188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:12.723470  302188 out.go:374] Setting ErrFile to fd 2...
	I1026 08:16:12.723492  302188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:12.723888  302188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:16:12.724232  302188 mustload.go:65] Loading cluster: addons-178002
	I1026 08:16:12.724655  302188 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:12.724694  302188 addons.go:606] checking whether the cluster is paused
	I1026 08:16:12.724839  302188 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:12.724871  302188 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:16:12.725401  302188 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:16:12.760681  302188 ssh_runner.go:195] Run: systemctl --version
	I1026 08:16:12.760747  302188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:16:12.777769  302188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:16:12.881057  302188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:16:12.881218  302188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:16:12.913524  302188 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:16:12.913555  302188 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:16:12.913560  302188 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:16:12.913564  302188 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:16:12.913567  302188 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:16:12.913570  302188 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:16:12.913575  302188 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:16:12.913578  302188 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:16:12.913581  302188 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:16:12.913587  302188 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:16:12.913591  302188 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:16:12.913594  302188 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:16:12.913601  302188 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:16:12.913605  302188 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:16:12.913608  302188 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:16:12.913612  302188 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:16:12.913618  302188 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:16:12.913622  302188 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:16:12.913625  302188 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:16:12.913628  302188 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:16:12.913633  302188 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:16:12.913636  302188 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:16:12.913638  302188 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:16:12.913641  302188 cri.go:89] found id: ""
	I1026 08:16:12.913691  302188 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:16:12.928756  302188 out.go:203] 
	W1026 08:16:12.931537  302188 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:16:12.931561  302188 out.go:285] * 
	W1026 08:16:13.383530  302188 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:16:13.389039  302188 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.76s)
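Analysis: the disable command did not fail because of Volcano itself (the addon test was already skipped on crio); it failed in minikube's pre-flight pause check, which shells into the node and runs "sudo runc list -f json". On this kicbase/crio image /run/runc does not exist, so the check errors out and every "addons disable" in this run aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal sketch for reproducing the check by hand, assuming the profile from this run is still up (the "ls /run" step is only a guess at finding which OCI runtime state directory the image actually uses, e.g. crun instead of runc):

	minikube -p addons-178002 ssh "sudo runc list -f json"    # expected to fail: open /run/runc: no such file or directory
	minikube -p addons-178002 ssh "ls /run"                   # look for the runtime state dir that does exist (crun, containers, ...)
	minikube -p addons-178002 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"    # the CRI-level listing that succeeded above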

TestAddons/parallel/Registry (18.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 15.923253ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004410204s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004183496s
addons_test.go:392: (dbg) Run:  kubectl --context addons-178002 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-178002 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-178002 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.371701075s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 ip
2025/10/26 08:16:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable registry --alsologtostderr -v=1: exit status 11 (274.749728ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:16:42.515623  303284 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:16:42.516424  303284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:42.516467  303284 out.go:374] Setting ErrFile to fd 2...
	I1026 08:16:42.516491  303284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:42.516777  303284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:16:42.517125  303284 mustload.go:65] Loading cluster: addons-178002
	I1026 08:16:42.517537  303284 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:42.517582  303284 addons.go:606] checking whether the cluster is paused
	I1026 08:16:42.517713  303284 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:42.517749  303284 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:16:42.518334  303284 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:16:42.540037  303284 ssh_runner.go:195] Run: systemctl --version
	I1026 08:16:42.540097  303284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:16:42.559495  303284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:16:42.669649  303284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:16:42.669726  303284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:16:42.709039  303284 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:16:42.709061  303284 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:16:42.709067  303284 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:16:42.709077  303284 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:16:42.709080  303284 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:16:42.709084  303284 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:16:42.709087  303284 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:16:42.709091  303284 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:16:42.709093  303284 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:16:42.709100  303284 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:16:42.709103  303284 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:16:42.709106  303284 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:16:42.709110  303284 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:16:42.709113  303284 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:16:42.709117  303284 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:16:42.709122  303284 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:16:42.709129  303284 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:16:42.709133  303284 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:16:42.709136  303284 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:16:42.709139  303284 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:16:42.709144  303284 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:16:42.709146  303284 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:16:42.709149  303284 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:16:42.709152  303284 cri.go:89] found id: ""
	I1026 08:16:42.709201  303284 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:16:42.724598  303284 out.go:203] 
	W1026 08:16:42.727668  303284 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:16:42.727697  303284 out.go:285] * 
	W1026 08:16:42.734010  303284 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:16:42.737134  303284 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (18.95s)
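Note: the registry checks themselves all passed (both pods became healthy, the in-cluster wget probe returned, and the GET against 192.168.49.2:5000 succeeded); only the trailing "addons disable registry" aborted on the same runc pause check described under Volcano above. As a hedged extra probe, the exposed registry can also be queried directly via the standard Docker Registry v2 API (the catalog path comes from that API, not from this test):

	curl http://192.168.49.2:5000/v2/_catalog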

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.49224ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-178002
addons_test.go:332: (dbg) Run:  kubectl --context addons-178002 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (261.945457ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:17:15.704762  304282 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:15.705362  304282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:15.705379  304282 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:15.705385  304282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:15.705694  304282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:15.706005  304282 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:15.706382  304282 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:15.706401  304282 addons.go:606] checking whether the cluster is paused
	I1026 08:17:15.706504  304282 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:15.706519  304282 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:15.707067  304282 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:15.728855  304282 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:15.728932  304282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:15.747730  304282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:15.853808  304282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:15.853914  304282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:15.884414  304282 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:15.884441  304282 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:15.884446  304282 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:15.884450  304282 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:15.884453  304282 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:15.884457  304282 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:15.884460  304282 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:15.884463  304282 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:15.884466  304282 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:15.884473  304282 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:15.884476  304282 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:15.884479  304282 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:15.884482  304282 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:15.884485  304282 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:15.884488  304282 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:15.884493  304282 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:15.884496  304282 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:15.884499  304282 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:15.884502  304282 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:15.884505  304282 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:15.884510  304282 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:15.884513  304282 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:15.884516  304282 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:15.884519  304282 cri.go:89] found id: ""
	I1026 08:17:15.884572  304282 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:15.899699  304282 out.go:203] 
	W1026 08:17:15.902516  304282 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:15.902538  304282 out.go:285] * 
	W1026 08:17:15.908998  304282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:15.911933  304282 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)
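Same failure mode again: the configure step and the secret inspection succeeded, and only the disable step tripped over the runc pause check. A hedged way to confirm the configured credentials actually landed, assuming the addon's conventional registry-creds-* secret naming in kube-system (the naming is an assumption, not verified in this log):

	kubectl --context addons-178002 -n kube-system get secrets -o name | grep registry-creds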

TestAddons/parallel/Ingress (143.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-178002 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-178002 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-178002 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5a5eaff3-d173-4f16-9435-c99dd169b3c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5a5eaff3-d173-4f16-9435-c99dd169b3c6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004381261s
I1026 08:17:04.071911  295475 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.192821106s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
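For context: ssh propagates the remote command's exit code, and curl exit status 28 is CURLE_OPERATION_TIMEDOUT, i.e. no HTTP response ever arrived from the ingress controller before the command gave up after roughly 2m10s. A hedged manual retry with verbose output and an explicit timeout (standard curl flags; the Host header matches the test's ingress rule):

	minikube -p addons-178002 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-178002 -n ingress-nginx get pods -o wide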
addons_test.go:288: (dbg) Run:  kubectl --context addons-178002 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-178002
helpers_test.go:243: (dbg) docker inspect addons-178002:

-- stdout --
	[
	    {
	        "Id": "b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d",
	        "Created": "2025-10-26T08:13:45.784640711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:13:45.860078529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/hosts",
	        "LogPath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d-json.log",
	        "Name": "/addons-178002",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-178002:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-178002",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d",
	                "LowerDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-178002",
	                "Source": "/var/lib/docker/volumes/addons-178002/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-178002",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-178002",
	                "name.minikube.sigs.k8s.io": "addons-178002",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7556e840e06a3f4a8a4fa7532564ee2fc4edac05a65cf8074f12fffa2b7b8e77",
	            "SandboxKey": "/var/run/docker/netns/7556e840e06a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-178002": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:f0:75:08:9c:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f5b2696acfb438fa073a6590a62d488c86d7998d0b7b91c4da9e01aeed87153",
	                    "EndpointID": "93c81d2c3104b04872040ce9c170acb47786dd8730969cfa86f13f8ccfa90b72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-178002",
	                        "b10aa919ba5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
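The NetworkSettings.Ports block above is what minikube resolves its SSH and API connections against (22/tcp is published on 127.0.0.1:33140 in this run). The same Go template that appears in the stderr logs earlier extracts that mapping directly:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-178002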
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-178002 -n addons-178002
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-178002 logs -n 25: (1.49705918s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-436037                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-436037 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ start   │ --download-only -p binary-mirror-991071 --alsologtostderr --binary-mirror http://127.0.0.1:45987 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-991071   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ delete  │ -p binary-mirror-991071                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-991071   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ addons  │ disable dashboard -p addons-178002                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-178002                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ start   │ -p addons-178002 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:16 UTC │
	│ addons  │ addons-178002 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ addons-178002 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-178002 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ addons-178002 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ ip      │ addons-178002 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │ 26 Oct 25 08:16 UTC │
	│ addons  │ addons-178002 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ addons-178002 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ addons-178002 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ ssh     │ addons-178002 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ addons  │ addons-178002 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ addons  │ addons-178002 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-178002                                                                                                                                                                                                                                                                                                                                                                                           │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │ 26 Oct 25 08:17 UTC │
	│ addons  │ addons-178002 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ addons  │ addons-178002 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ addons  │ addons-178002 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ ssh     │ addons-178002 ssh cat /opt/local-path-provisioner/pvc-8792530b-a4c3-4092-b81e-3346c6acb3ac_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │ 26 Oct 25 08:17 UTC │
	│ addons  │ addons-178002 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ addons  │ addons-178002 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:17 UTC │                     │
	│ ip      │ addons-178002 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:19 UTC │ 26 Oct 25 08:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:13:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
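Each entry below uses that klog header: severity letter (I/W/E/F), date as mmdd, timestamp, PID, then the emitting source file and line. A minimal sketch for slicing a saved copy of this log by severity or by subsystem (the file name last-start.log is hypothetical):

    # Warnings, errors and fatals only:
    grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log
    # Follow one subsystem, e.g. the Docker network bootstrap:
    grep 'network_create.go' last-start.log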
	I1026 08:13:20.922404  296225 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:13:20.922581  296225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:20.922604  296225 out.go:374] Setting ErrFile to fd 2...
	I1026 08:13:20.922623  296225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:20.922960  296225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:13:20.923436  296225 out.go:368] Setting JSON to false
	I1026 08:13:20.924290  296225 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6951,"bootTime":1761459450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:13:20.924390  296225 start.go:141] virtualization:  
	I1026 08:13:20.935389  296225 out.go:179] * [addons-178002] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:13:20.946013  296225 notify.go:220] Checking for updates...
	I1026 08:13:20.965934  296225 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:13:20.996439  296225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:13:21.031225  296225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:13:21.062180  296225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:13:21.085424  296225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:13:21.119476  296225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:13:21.151206  296225 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:13:21.172127  296225 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:13:21.172271  296225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:21.234026  296225 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 08:13:21.224352643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:21.234128  296225 docker.go:318] overlay module found
	I1026 08:13:21.270010  296225 out.go:179] * Using the docker driver based on user configuration
	I1026 08:13:21.302208  296225 start.go:305] selected driver: docker
	I1026 08:13:21.302238  296225 start.go:925] validating driver "docker" against <nil>
	I1026 08:13:21.302254  296225 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:13:21.303022  296225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:21.373137  296225 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 08:13:21.363263084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:21.373298  296225 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:13:21.373522  296225 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:13:21.392997  296225 out.go:179] * Using Docker driver with root privileges
	I1026 08:13:21.427317  296225 cni.go:84] Creating CNI manager for ""
	I1026 08:13:21.427413  296225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:13:21.427426  296225 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:13:21.427515  296225 start.go:349] cluster config:
	{Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:13:21.458155  296225 out.go:179] * Starting "addons-178002" primary control-plane node in "addons-178002" cluster
	I1026 08:13:21.489531  296225 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:13:21.523343  296225 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:13:21.563070  296225 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:13:21.563071  296225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:13:21.563154  296225 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 08:13:21.563165  296225 cache.go:58] Caching tarball of preloaded images
	I1026 08:13:21.563243  296225 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:13:21.563253  296225 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:13:21.563637  296225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json ...
	I1026 08:13:21.563672  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json: {Name:mk9b0a2e0e4ccf16030eb426a52449eb315471fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
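The cluster config rendered above is persisted as JSON at the profile path in the preceding line; it can be pretty-printed on the host with standard tooling (a sketch using this run's path):

    python3 -m json.tool \
      /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json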
	I1026 08:13:21.580461  296225 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 08:13:21.580630  296225 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 08:13:21.580656  296225 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 08:13:21.580665  296225 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 08:13:21.580673  296225 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 08:13:21.580679  296225 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 08:13:39.653167  296225 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 08:13:39.653200  296225 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:13:39.653231  296225 start.go:360] acquireMachinesLock for addons-178002: {Name:mke1fda8b123db5306a3ea50855b62b314240b5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:13:39.654130  296225 start.go:364] duration metric: took 875.289µs to acquireMachinesLock for "addons-178002"
	I1026 08:13:39.654177  296225 start.go:93] Provisioning new machine with config: &{Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:13:39.654267  296225 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:13:39.657584  296225 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 08:13:39.657826  296225 start.go:159] libmachine.API.Create for "addons-178002" (driver="docker")
	I1026 08:13:39.657877  296225 client.go:168] LocalClient.Create starting
	I1026 08:13:39.657994  296225 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 08:13:39.869081  296225 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 08:13:40.158499  296225 cli_runner.go:164] Run: docker network inspect addons-178002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:13:40.174065  296225 cli_runner.go:211] docker network inspect addons-178002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:13:40.174164  296225 network_create.go:284] running [docker network inspect addons-178002] to gather additional debugging logs...
	I1026 08:13:40.174185  296225 cli_runner.go:164] Run: docker network inspect addons-178002
	W1026 08:13:40.191633  296225 cli_runner.go:211] docker network inspect addons-178002 returned with exit code 1
	I1026 08:13:40.191664  296225 network_create.go:287] error running [docker network inspect addons-178002]: docker network inspect addons-178002: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-178002 not found
	I1026 08:13:40.191678  296225 network_create.go:289] output of [docker network inspect addons-178002]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-178002 not found
	
	** /stderr **
	I1026 08:13:40.191771  296225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:13:40.208792  296225 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7c80}
	I1026 08:13:40.208839  296225 network_create.go:124] attempt to create docker network addons-178002 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 08:13:40.208896  296225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-178002 addons-178002
	I1026 08:13:40.266159  296225 network_create.go:108] docker network addons-178002 192.168.49.0/24 created
	I1026 08:13:40.266188  296225 kic.go:121] calculated static IP "192.168.49.2" for the "addons-178002" container
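The network bootstrap above is roughly equivalent to this manual invocation; every flag value is taken from the network_create lines of this log (minikube owns the network in practice, so this is a reproduction sketch only):

    docker network create \
      --driver=bridge \
      --subnet=192.168.49.0/24 \
      --gateway=192.168.49.1 \
      -o --ip-masq -o --icc \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=addons-178002 \
      addons-178002
    # Confirm the subnet and gateway that the static IP was calculated from:
    docker network inspect addons-178002 --format '{{json .IPAM.Config}}'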
	I1026 08:13:40.266261  296225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:13:40.280089  296225 cli_runner.go:164] Run: docker volume create addons-178002 --label name.minikube.sigs.k8s.io=addons-178002 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:13:40.302549  296225 oci.go:103] Successfully created a docker volume addons-178002
	I1026 08:13:40.302676  296225 cli_runner.go:164] Run: docker run --rm --name addons-178002-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-178002 --entrypoint /usr/bin/test -v addons-178002:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:13:41.263386  296225 oci.go:107] Successfully prepared a docker volume addons-178002
	I1026 08:13:41.263429  296225 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:13:41.263449  296225 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:13:41.263513  296225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-178002:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:13:45.716244  296225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-178002:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.45267068s)
	I1026 08:13:45.716278  296225 kic.go:203] duration metric: took 4.452825291s to extract preloaded images to volume ...
	W1026 08:13:45.716441  296225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 08:13:45.716582  296225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:13:45.770130  296225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-178002 --name addons-178002 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-178002 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-178002 --network addons-178002 --ip 192.168.49.2 --volume addons-178002:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:13:46.073788  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Running}}
	I1026 08:13:46.092017  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:13:46.117792  296225 cli_runner.go:164] Run: docker exec addons-178002 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:13:46.169051  296225 oci.go:144] the created container "addons-178002" has a running status.
	I1026 08:13:46.169094  296225 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa...
	I1026 08:13:46.595046  296225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:13:46.614478  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:13:46.630245  296225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:13:46.630264  296225 kic_runner.go:114] Args: [docker exec --privileged addons-178002 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:13:46.669205  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:13:46.686343  296225 machine.go:93] provisionDockerMachine start ...
	I1026 08:13:46.686435  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:46.703700  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:46.704031  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:46.704047  296225 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:13:46.704618  296225 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50380->127.0.0.1:33140: read: connection reset by peer
	I1026 08:13:49.854297  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-178002
	
	I1026 08:13:49.854321  296225 ubuntu.go:182] provisioning hostname "addons-178002"
	I1026 08:13:49.854386  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:49.872042  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:49.872374  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:49.872392  296225 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-178002 && echo "addons-178002" | sudo tee /etc/hostname
	I1026 08:13:50.033346  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-178002
	
	I1026 08:13:50.033430  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.052103  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:50.052408  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:50.052432  296225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-178002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-178002/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-178002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:13:50.199402  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
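The effect of the hostname script above can be checked from the host once the node is up (a sketch; -p selects the profile created in this run):

    minikube -p addons-178002 ssh -- grep -n addons-178002 /etc/hosts
    # Expected: a 127.0.1.1 entry carrying the node hostname.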
	I1026 08:13:50.199490  296225 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:13:50.199555  296225 ubuntu.go:190] setting up certificates
	I1026 08:13:50.199589  296225 provision.go:84] configureAuth start
	I1026 08:13:50.199686  296225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-178002
	I1026 08:13:50.215834  296225 provision.go:143] copyHostCerts
	I1026 08:13:50.215921  296225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:13:50.216083  296225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:13:50.216164  296225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:13:50.216226  296225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.addons-178002 san=[127.0.0.1 192.168.49.2 addons-178002 localhost minikube]
	I1026 08:13:50.435359  296225 provision.go:177] copyRemoteCerts
	I1026 08:13:50.435426  296225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:13:50.435468  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.453201  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:50.554780  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:13:50.572407  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:13:50.590296  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:13:50.608232  296225 provision.go:87] duration metric: took 408.614622ms to configureAuth
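The server certificate installed under /etc/docker should carry the SANs listed in the generating-server-cert line above; a quick check from inside the node (a sketch; the -ext flag needs OpenSSL 1.1.1+):

    minikube -p addons-178002 ssh -- sudo openssl x509 \
      -in /etc/docker/server.pem -noout -subject -ext subjectAltName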
	I1026 08:13:50.608306  296225 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:13:50.608527  296225 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:13:50.608644  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.626407  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:50.626770  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:50.626791  296225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:13:50.883942  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:13:50.883966  296225 machine.go:96] duration metric: took 4.197598925s to provisionDockerMachine
	I1026 08:13:50.883986  296225 client.go:171] duration metric: took 11.226098649s to LocalClient.Create
	I1026 08:13:50.884000  296225 start.go:167] duration metric: took 11.226176319s to libmachine.API.Create "addons-178002"
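To confirm that the CRIO_MINIKUBE_OPTIONS written a few lines above survived the CRI-O restart (a sketch):

    minikube -p addons-178002 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p addons-178002 ssh -- sudo systemctl is-active crio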
	I1026 08:13:50.884008  296225 start.go:293] postStartSetup for "addons-178002" (driver="docker")
	I1026 08:13:50.884018  296225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:13:50.884094  296225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:13:50.884144  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.902250  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.006532  296225 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:13:51.015272  296225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:13:51.015306  296225 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:13:51.015319  296225 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:13:51.015426  296225 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:13:51.015459  296225 start.go:296] duration metric: took 131.4447ms for postStartSetup
	I1026 08:13:51.015809  296225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-178002
	I1026 08:13:51.033271  296225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json ...
	I1026 08:13:51.033561  296225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:13:51.033611  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:51.050702  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.156336  296225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:13:51.161537  296225 start.go:128] duration metric: took 11.507252441s to createHost
	I1026 08:13:51.161577  296225 start.go:83] releasing machines lock for "addons-178002", held for 11.507411917s
	I1026 08:13:51.161698  296225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-178002
	I1026 08:13:51.179682  296225 ssh_runner.go:195] Run: cat /version.json
	I1026 08:13:51.179739  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:51.179771  296225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:13:51.179838  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:51.198635  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.200221  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.298421  296225 ssh_runner.go:195] Run: systemctl --version
	I1026 08:13:51.393097  296225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:13:51.433424  296225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:13:51.437544  296225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:13:51.437660  296225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:13:51.465536  296225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 08:13:51.465568  296225 start.go:495] detecting cgroup driver to use...
	I1026 08:13:51.465602  296225 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:13:51.465651  296225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:13:51.482580  296225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:13:51.494669  296225 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:13:51.494903  296225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:13:51.512764  296225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:13:51.530963  296225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:13:51.648831  296225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:13:51.779518  296225 docker.go:234] disabling docker service ...
	I1026 08:13:51.779624  296225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:13:51.801654  296225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:13:51.814306  296225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:13:51.931086  296225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:13:52.049566  296225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:13:52.063057  296225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:13:52.081428  296225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:13:52.081501  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.091183  296225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:13:52.091258  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.100194  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.109190  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.118089  296225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:13:52.126685  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.135958  296225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.149275  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.157928  296225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:13:52.165130  296225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:13:52.172623  296225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:13:52.286773  296225 ssh_runner.go:195] Run: sudo systemctl restart crio
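Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf containing settings along these lines (a reconstruction from the commands in the log, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]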
	I1026 08:13:52.411954  296225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:13:52.412064  296225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:13:52.416076  296225 start.go:563] Will wait 60s for crictl version
	I1026 08:13:52.416174  296225 ssh_runner.go:195] Run: which crictl
	I1026 08:13:52.419781  296225 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:13:52.447661  296225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:13:52.447795  296225 ssh_runner.go:195] Run: crio --version
	I1026 08:13:52.477049  296225 ssh_runner.go:195] Run: crio --version
	I1026 08:13:52.509012  296225 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:13:52.512070  296225 cli_runner.go:164] Run: docker network inspect addons-178002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:13:52.527592  296225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:13:52.531227  296225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
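The hosts-file one-liner above makes the Docker gateway resolvable by name from inside the node; it can be verified with (a sketch):

    minikube -p addons-178002 ssh -- getent hosts host.minikube.internal
    # Expected: 192.168.49.1  host.minikube.internal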
	I1026 08:13:52.540607  296225 kubeadm.go:883] updating cluster {Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:13:52.540727  296225 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:13:52.540784  296225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:13:52.572277  296225 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:13:52.572302  296225 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:13:52.572364  296225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:13:52.601071  296225 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:13:52.601093  296225 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:13:52.601100  296225 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 08:13:52.601198  296225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-178002 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
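The kubelet flags above are delivered as a systemd drop-in (see the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); the merged unit can be inspected with (a sketch):

    minikube -p addons-178002 ssh -- systemctl cat kubelet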
	I1026 08:13:52.601281  296225 ssh_runner.go:195] Run: crio config
	I1026 08:13:52.671032  296225 cni.go:84] Creating CNI manager for ""
	I1026 08:13:52.671052  296225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:13:52.671077  296225 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:13:52.671100  296225 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-178002 NodeName:addons-178002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:13:52.671234  296225 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-178002"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
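
A generated config like the one above can be sanity-checked before kubeadm consumes it; the path matches the scp destination a few lines below, and kubeadm config validate is available on recent releases (v1.26+), so this is a sketch:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new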
	
	I1026 08:13:52.671317  296225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:13:52.678773  296225 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:13:52.678907  296225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:13:52.686288  296225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:13:52.698573  296225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:13:52.712113  296225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1026 08:13:52.724460  296225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:13:52.727945  296225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:13:52.737479  296225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:13:52.849416  296225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:13:52.864413  296225 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002 for IP: 192.168.49.2
	I1026 08:13:52.864474  296225 certs.go:195] generating shared ca certs ...
	I1026 08:13:52.864505  296225 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:52.864655  296225 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:13:53.283672  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt ...
	I1026 08:13:53.283705  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt: {Name:mk52185ba7eb3198f2aa31696853a84dc9f3f8f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.284486  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key ...
	I1026 08:13:53.284506  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key: {Name:mkafbd07ac86bd46c3008360f658487f62084ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.285172  296225 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:13:53.726000  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt ...
	I1026 08:13:53.726035  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt: {Name:mka3c5574cf58a4e94452c3f4733046bf7166c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.726807  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key ...
	I1026 08:13:53.726825  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key: {Name:mkaeb2e1f0d31de3904b996c06503d5146d83c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.726913  296225 certs.go:257] generating profile certs ...
	I1026 08:13:53.726973  296225 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.key
	I1026 08:13:53.726990  296225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt with IP's: []
	I1026 08:13:54.254950  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt ...
	I1026 08:13:54.254993  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: {Name:mk843ba9d2ee337ff36af75db50ad7a49e181329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.255180  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.key ...
	I1026 08:13:54.255192  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.key: {Name:mkcb6018a2404aa7732c5e6fa2e629573b1667c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.255863  296225 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a
	I1026 08:13:54.255886  296225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 08:13:54.769867  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a ...
	I1026 08:13:54.769899  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a: {Name:mkba926599df4eb92c1da5cce3de24ed428d8993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.770693  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a ...
	I1026 08:13:54.770744  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a: {Name:mk1eb95ff6463545d43368ede656f0681d894143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.771412  296225 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt
	I1026 08:13:54.771501  296225 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key
	I1026 08:13:54.771557  296225 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key
	I1026 08:13:54.771582  296225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt with IP's: []
	I1026 08:13:55.354395  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt ...
	I1026 08:13:55.354426  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt: {Name:mk121ba1de94c2992d6b7dab04979c44c5e525e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:55.354625  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key ...
	I1026 08:13:55.354640  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key: {Name:mk6079091ca4029534e435284ffdb6f35be44b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:55.354858  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:13:55.354900  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:13:55.354924  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:13:55.354957  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:13:55.355574  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:13:55.375981  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:13:55.396532  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:13:55.414835  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:13:55.433075  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 08:13:55.451449  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:13:55.470342  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:13:55.487953  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:13:55.505821  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:13:55.524024  296225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:13:55.537773  296225 ssh_runner.go:195] Run: openssl version
	I1026 08:13:55.544305  296225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:13:55.553249  296225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:13:55.557119  296225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:13:55.557186  296225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:13:55.598589  296225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
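
	The commands from 08:13:55.537773 through 08:13:55.598589 wire the minikube CA into the system trust store: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 (b5213941.0 here) back to it, which is how OpenSSL's hashed cert-directory lookup finds CAs. A sketch of the hash-and-link step, shelling out to the same openssl invocation seen in the log:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	        // `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	        _ = os.Remove(link) // `ln -fs` semantics: replace any existing link
	        if err := os.Symlink(pem, link); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("linked", link, "->", pem)
	    }
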
	I1026 08:13:55.607437  296225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:13:55.611302  296225 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:13:55.611395  296225 kubeadm.go:400] StartCluster: {Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:13:55.611494  296225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:13:55.611555  296225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:13:55.642211  296225 cri.go:89] found id: ""
	I1026 08:13:55.642292  296225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:13:55.650505  296225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:13:55.658243  296225 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:13:55.658339  296225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:13:55.666112  296225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:13:55.666132  296225 kubeadm.go:157] found existing configuration files:
	
	I1026 08:13:55.666186  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:13:55.673915  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:13:55.673984  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:13:55.681456  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:13:55.689392  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:13:55.689505  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:13:55.697275  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:13:55.705233  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:13:55.705299  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:13:55.712953  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:13:55.720435  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:13:55.720500  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
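
	The four grep/rm pairs above all implement the same stale-config check: a kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it on init. A condensed Go sketch of that loop, with the paths copied from the log (the loop itself is illustrative, not minikube's source):

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os"
	    )

	    func main() {
	        endpoint := []byte("https://control-plane.minikube.internal:8443")
	        for _, f := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            data, err := os.ReadFile(f)
	            if err != nil || !bytes.Contains(data, endpoint) {
	                os.Remove(f) // absent or pointing elsewhere: let kubeadm rewrite it
	                fmt.Println("cleared stale config:", f)
	            }
	        }
	    }
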
	I1026 08:13:55.727747  296225 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:13:55.768223  296225 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:13:55.768452  296225 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:13:55.791765  296225 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:13:55.791935  296225 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 08:13:55.792020  296225 kubeadm.go:318] OS: Linux
	I1026 08:13:55.792115  296225 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:13:55.792183  296225 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 08:13:55.792246  296225 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:13:55.792304  296225 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:13:55.792377  296225 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:13:55.792471  296225 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:13:55.792526  296225 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:13:55.792581  296225 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:13:55.792654  296225 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 08:13:55.862807  296225 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:13:55.862922  296225 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:13:55.863023  296225 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:13:55.872144  296225 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 08:13:55.878923  296225 out.go:252]   - Generating certificates and keys ...
	I1026 08:13:55.879043  296225 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:13:55.879119  296225 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:13:56.173764  296225 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:13:56.681063  296225 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:13:57.194081  296225 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:13:57.649246  296225 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:13:58.005153  296225 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:13:58.005486  296225 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-178002 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 08:13:58.068174  296225 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:13:58.068544  296225 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-178002 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 08:13:58.660812  296225 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:13:59.906676  296225 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 08:14:01.234743  296225 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:14:01.235045  296225 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:14:01.931079  296225 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:14:02.129063  296225 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:14:02.405716  296225 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:14:03.639607  296225 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:14:04.139099  296225 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:14:04.140112  296225 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:14:04.144271  296225 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 08:14:04.147600  296225 out.go:252]   - Booting up control plane ...
	I1026 08:14:04.147713  296225 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:14:04.147862  296225 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:14:04.147934  296225 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:14:04.162921  296225 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:14:04.163315  296225 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:14:04.171206  296225 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:14:04.171573  296225 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:14:04.171623  296225 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:14:04.304403  296225 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:14:04.304532  296225 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:14:05.807107  296225 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500948639s
	I1026 08:14:05.807989  296225 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:14:05.808112  296225 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 08:14:05.808292  296225 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:14:05.808386  296225 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:14:10.017994  296225 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.208896716s
	I1026 08:14:10.282674  296225 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.474568889s
	I1026 08:14:12.311215  296225 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502714734s
	I1026 08:14:12.332613  296225 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:14:12.347761  296225 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:14:12.362562  296225 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:14:12.362810  296225 kubeadm.go:318] [mark-control-plane] Marking the node addons-178002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:14:12.375067  296225 kubeadm.go:318] [bootstrap-token] Using token: ipyrbz.o4znooj9wtawkvdk
	I1026 08:14:12.378095  296225 out.go:252]   - Configuring RBAC rules ...
	I1026 08:14:12.378246  296225 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:14:12.382167  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:14:12.390817  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:14:12.395041  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:14:12.401459  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:14:12.405552  296225 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:14:12.718008  296225 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:14:13.168625  296225 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:14:13.718882  296225 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:14:13.719997  296225 kubeadm.go:318] 
	I1026 08:14:13.720076  296225 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:14:13.720087  296225 kubeadm.go:318] 
	I1026 08:14:13.720165  296225 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:14:13.720173  296225 kubeadm.go:318] 
	I1026 08:14:13.720199  296225 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:14:13.720262  296225 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:14:13.720319  296225 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:14:13.720328  296225 kubeadm.go:318] 
	I1026 08:14:13.720382  296225 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:14:13.720390  296225 kubeadm.go:318] 
	I1026 08:14:13.720438  296225 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:14:13.720453  296225 kubeadm.go:318] 
	I1026 08:14:13.720508  296225 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:14:13.720587  296225 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:14:13.720659  296225 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:14:13.720667  296225 kubeadm.go:318] 
	I1026 08:14:13.720752  296225 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:14:13.720831  296225 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:14:13.720840  296225 kubeadm.go:318] 
	I1026 08:14:13.720924  296225 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ipyrbz.o4znooj9wtawkvdk \
	I1026 08:14:13.721031  296225 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 08:14:13.721055  296225 kubeadm.go:318] 	--control-plane 
	I1026 08:14:13.721064  296225 kubeadm.go:318] 
	I1026 08:14:13.721149  296225 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:14:13.721157  296225 kubeadm.go:318] 
	I1026 08:14:13.721238  296225 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ipyrbz.o4znooj9wtawkvdk \
	I1026 08:14:13.721343  296225 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 08:14:13.724013  296225 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 08:14:13.724251  296225 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 08:14:13.724363  296225 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 08:14:13.724384  296225 cni.go:84] Creating CNI manager for ""
	I1026 08:14:13.724394  296225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:14:13.727579  296225 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:14:13.730568  296225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 08:14:13.734472  296225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:14:13.734533  296225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 08:14:13.747492  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
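
	The cni.go lines above record the CNI decision: with the docker driver paired with a non-docker runtime (crio here), a real CNI is required, and kindnet is the recommended default, applied via the pinned kubectl. A toy sketch of that decision rule under those assumptions (chooseCNI is illustrative, not the real cni manager logic):

	    package main

	    import "fmt"

	    // chooseCNI mirrors the rule logged above: docker driver + crio (or any
	    // non-docker runtime) gets kindnet; otherwise the basic bridge suffices.
	    func chooseCNI(driver, runtime string) string {
	        if driver == "docker" && runtime != "docker" {
	            return "kindnet"
	        }
	        return "bridge"
	    }

	    func main() {
	        fmt.Println(chooseCNI("docker", "crio")) // kindnet
	    }
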
	I1026 08:14:14.052061  296225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:14:14.052196  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:14.052277  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-178002 minikube.k8s.io/updated_at=2025_10_26T08_14_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=addons-178002 minikube.k8s.io/primary=true
	I1026 08:14:14.244747  296225 ops.go:34] apiserver oom_adj: -16
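
	The ops.go value of -16 comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe at 08:14:14.052061; a negative score biases the kernel's OOM killer away from the process. A Linux-only Go sketch of the same probe, using a simplified pgrep over /proc:

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	        "strings"
	    )

	    func main() {
	        // Scan /proc/<pid>/comm for the apiserver, then read its oom_adj.
	        procs, _ := filepath.Glob("/proc/[0-9]*/comm")
	        for _, comm := range procs {
	            name, err := os.ReadFile(comm)
	            if err != nil || strings.TrimSpace(string(name)) != "kube-apiserver" {
	                continue
	            }
	            adj, err := os.ReadFile(filepath.Dir(comm) + "/oom_adj")
	            if err == nil {
	                fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
	            }
	        }
	    }
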
	I1026 08:14:14.244852  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:14.745202  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:15.245826  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:15.744985  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:16.244964  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:16.745002  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:17.244999  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:17.745774  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:18.244997  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:18.340066  296225 kubeadm.go:1113] duration metric: took 4.287920924s to wait for elevateKubeSystemPrivileges
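
	The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are the elevateKubeSystemPrivileges wait: after binding kube-system:default to cluster-admin (the minikube-rbac clusterrolebinding at 08:14:14.052196), startup blocks until the default service account exists. A sketch of that wait loop (waitForDefaultSA is an assumed name; paths are taken from the log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
	            if cmd.Run() == nil {
	                return nil // service account exists; RBAC is usable
	            }
	            time.Sleep(500 * time.Millisecond) // poll interval seen in the log
	        }
	        return fmt.Errorf("default service account not ready after %s", timeout)
	    }

	    func main() {
	        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
	            "/var/lib/minikube/kubeconfig", 4*time.Minute)
	        fmt.Println(err)
	    }
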
	I1026 08:14:18.340105  296225 kubeadm.go:402] duration metric: took 22.728713047s to StartCluster
	I1026 08:14:18.340127  296225 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:14:18.340890  296225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:14:18.341270  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:14:18.341493  296225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:14:18.341644  296225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:14:18.341915  296225 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:14:18.341935  296225 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 08:14:18.342036  296225 addons.go:69] Setting yakd=true in profile "addons-178002"
	I1026 08:14:18.342050  296225 addons.go:238] Setting addon yakd=true in "addons-178002"
	I1026 08:14:18.342067  296225 addons.go:69] Setting inspektor-gadget=true in profile "addons-178002"
	I1026 08:14:18.342077  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.342081  296225 addons.go:238] Setting addon inspektor-gadget=true in "addons-178002"
	I1026 08:14:18.342102  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.342599  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.342612  296225 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-178002"
	I1026 08:14:18.342625  296225 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-178002"
	I1026 08:14:18.342644  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.343046  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.343147  296225 addons.go:69] Setting metrics-server=true in profile "addons-178002"
	I1026 08:14:18.343164  296225 addons.go:238] Setting addon metrics-server=true in "addons-178002"
	I1026 08:14:18.343187  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.343606  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.346190  296225 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-178002"
	I1026 08:14:18.346231  296225 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-178002"
	I1026 08:14:18.346267  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.346783  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.346994  296225 addons.go:69] Setting cloud-spanner=true in profile "addons-178002"
	I1026 08:14:18.347053  296225 addons.go:238] Setting addon cloud-spanner=true in "addons-178002"
	I1026 08:14:18.347099  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.347539  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347731  296225 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-178002"
	I1026 08:14:18.362231  296225 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-178002"
	I1026 08:14:18.362266  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.362794  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347743  296225 addons.go:69] Setting default-storageclass=true in profile "addons-178002"
	I1026 08:14:18.379729  296225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-178002"
	I1026 08:14:18.380114  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347749  296225 addons.go:69] Setting gcp-auth=true in profile "addons-178002"
	I1026 08:14:18.382161  296225 mustload.go:65] Loading cluster: addons-178002
	I1026 08:14:18.382442  296225 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:14:18.402006  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347755  296225 addons.go:69] Setting ingress=true in profile "addons-178002"
	I1026 08:14:18.413909  296225 addons.go:238] Setting addon ingress=true in "addons-178002"
	I1026 08:14:18.413978  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.414505  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347760  296225 addons.go:69] Setting ingress-dns=true in profile "addons-178002"
	I1026 08:14:18.458849  296225 addons.go:238] Setting addon ingress-dns=true in "addons-178002"
	I1026 08:14:18.458932  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.459683  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347806  296225 out.go:179] * Verifying Kubernetes components...
	I1026 08:14:18.348232  296225 addons.go:69] Setting volcano=true in profile "addons-178002"
	I1026 08:14:18.473215  296225 addons.go:238] Setting addon volcano=true in "addons-178002"
	I1026 08:14:18.473266  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.473729  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.473877  296225 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 08:14:18.348244  296225 addons.go:69] Setting registry=true in profile "addons-178002"
	I1026 08:14:18.484742  296225 addons.go:238] Setting addon registry=true in "addons-178002"
	I1026 08:14:18.484783  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.485251  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.492683  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 08:14:18.492760  296225 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 08:14:18.492865  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.348251  296225 addons.go:69] Setting registry-creds=true in profile "addons-178002"
	I1026 08:14:18.494260  296225 addons.go:238] Setting addon registry-creds=true in "addons-178002"
	I1026 08:14:18.494297  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.494802  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.348257  296225 addons.go:69] Setting storage-provisioner=true in profile "addons-178002"
	I1026 08:14:18.510842  296225 addons.go:238] Setting addon storage-provisioner=true in "addons-178002"
	I1026 08:14:18.510885  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.511376  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.511619  296225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:14:18.515660  296225 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 08:14:18.521621  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 08:14:18.521689  296225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 08:14:18.521782  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.348262  296225 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-178002"
	I1026 08:14:18.523608  296225 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-178002"
	I1026 08:14:18.523953  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.342601  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.348270  296225 addons.go:69] Setting volumesnapshots=true in profile "addons-178002"
	I1026 08:14:18.576743  296225 addons.go:238] Setting addon volumesnapshots=true in "addons-178002"
	I1026 08:14:18.576785  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.610870  296225 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 08:14:18.613753  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.648480  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 08:14:18.650469  296225 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 08:14:18.651018  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.651076  296225 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 08:14:18.652558  296225 addons.go:238] Setting addon default-storageclass=true in "addons-178002"
	I1026 08:14:18.660693  296225 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 08:14:18.660717  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 08:14:18.660785  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.663839  296225 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 08:14:18.665271  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 08:14:18.665338  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.667117  296225 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 08:14:18.667168  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 08:14:18.667256  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.687621  296225 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 08:14:18.664687  296225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:14:18.665254  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 08:14:18.690944  296225 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 08:14:18.691021  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 08:14:18.691124  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.696051  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 08:14:18.699030  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 08:14:18.709441  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.710469  296225 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 08:14:18.715267  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 08:14:18.715513  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.716019  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.734613  296225 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 08:14:18.739517  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1026 08:14:18.783731  296225 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 08:14:18.791384  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 08:14:18.791801  296225 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 08:14:18.791818  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 08:14:18.791892  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.805846  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 08:14:18.812508  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.816882  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 08:14:18.817095  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 08:14:18.823990  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 08:14:18.831134  296225 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 08:14:18.831182  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 08:14:18.831270  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.832760  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 08:14:18.832784  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 08:14:18.832863  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.865957  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 08:14:18.866029  296225 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:14:18.872754  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 08:14:18.872785  296225 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 08:14:18.872852  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.873028  296225 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:14:18.873044  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:14:18.873086  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.895179  296225 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 08:14:18.900572  296225 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 08:14:18.900594  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 08:14:18.900668  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.903353  296225 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-178002"
	I1026 08:14:18.903390  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.903796  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.919317  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.920394  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.921055  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.922755  296225 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:14:18.922771  296225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:14:18.922829  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.924147  296225 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 08:14:18.927066  296225 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 08:14:18.927089  296225 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 08:14:18.927153  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.983216  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.013276  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.036749  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.069412  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.072832  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.086419  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.099454  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.110446  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.111373  296225 out.go:179]   - Using image docker.io/busybox:stable
	I1026 08:14:19.113007  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	W1026 08:14:19.115008  296225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 08:14:19.115041  296225 retry.go:31] will retry after 162.946604ms: ssh: handshake failed: EOF
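
	The handshake failure at 08:14:19.115008 is absorbed by retry.go with a short randomized delay (162.946604ms here) rather than failing addon setup outright. A generic sketch of that retry shape, assuming a jittered backoff (withRetry is illustrative; the real delays come from minikube's retry package):

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    func withRetry(attempts int, op func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = op(); err == nil {
	                return nil
	            }
	            // Jittered pause in the ~100-250ms range, like the ~163ms in the log.
	            delay := time.Duration(100+rand.Intn(150)) * time.Millisecond
	            fmt.Printf("will retry after %s: %v\n", delay, err)
	            time.Sleep(delay)
	        }
	        return err
	    }

	    func main() {
	        i := 0
	        _ = withRetry(3, func() error {
	            i++
	            if i < 2 {
	                return fmt.Errorf("ssh: handshake failed: EOF")
	            }
	            return nil
	        })
	    }
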
	I1026 08:14:19.117712  296225 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 08:14:19.120774  296225 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 08:14:19.120797  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 08:14:19.120857  296225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:14:19.120863  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:19.156041  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.417470  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 08:14:19.417532  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 08:14:19.436470  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 08:14:19.436533  296225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 08:14:19.546371  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 08:14:19.619412  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 08:14:19.650494  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 08:14:19.650528  296225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 08:14:19.653412  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:14:19.653686  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 08:14:19.653700  296225 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 08:14:19.667178  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 08:14:19.679892  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:14:19.691109  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 08:14:19.693841  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 08:14:19.693881  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 08:14:19.713358  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 08:14:19.734906  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 08:14:19.747771  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 08:14:19.758864  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 08:14:19.773369  296225 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 08:14:19.773412  296225 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 08:14:19.775742  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 08:14:19.775766  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 08:14:19.793650  296225 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 08:14:19.793678  296225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 08:14:19.854698  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 08:14:19.854781  296225 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 08:14:19.932621  296225 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 08:14:19.932643  296225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 08:14:19.971710  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 08:14:19.971741  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 08:14:19.972580  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 08:14:19.972597  296225 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 08:14:19.979613  296225 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 08:14:19.979638  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 08:14:20.001821  296225 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:20.001855  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 08:14:20.195597  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 08:14:20.195623  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 08:14:20.214571  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 08:14:20.214615  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 08:14:20.232492  296225 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 08:14:20.232519  296225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 08:14:20.316811  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 08:14:20.329861  296225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.639970968s)
	I1026 08:14:20.329886  296225 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
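The two lines above show minikube splicing a `hosts` stanza into the CoreDNS Corefile via sed so that host.minikube.internal resolves to the host gateway (192.168.49.1). A minimal Go sketch of the same string transformation, purely illustrative; the real flow round-trips the live coredns ConfigMap through kubectl rather than editing a local string:

```go
package main

import (
	"fmt"
	"strings"
)

// insertHostsStanza mirrors the sed expression logged above: splice a
// hosts{} block in front of the "forward . /etc/resolv.conf" line.
func insertHostsStanza(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, stanza+marker, 1)
}

func main() {
	// Toy Corefile; the real one lives in the kube-system coredns ConfigMap.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(insertHostsStanza(corefile, "192.168.49.1"))
}
```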
	I1026 08:14:20.329832  296225 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2089502s)
	I1026 08:14:20.331481  296225 node_ready.go:35] waiting up to 6m0s for node "addons-178002" to be "Ready" ...
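node_ready.go now polls the node object until its Ready condition turns True, for up to six minutes. A minimal client-go sketch of that wait; the kubeconfig path and the two-second poll interval are illustrative assumptions, not minikube's actual values:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's NodeReady condition is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) bool {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		if nodeReady(ctx, cs, "addons-178002") {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // fixed-interval poll, like the log above
	}
	panic("timed out waiting for node to be Ready")
}
```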
	I1026 08:14:20.344174  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:20.438075  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 08:14:20.438100  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 08:14:20.504651  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 08:14:20.504673  296225 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 08:14:20.529543  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 08:14:20.618064  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 08:14:20.618129  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 08:14:20.761501  296225 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 08:14:20.761567  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 08:14:20.835896  296225 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-178002" context rescaled to 1 replica
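The rescale above trims coredns to a single replica for this single-node cluster. A compact sketch of the same operation through the Deployment scale subresource; clientset construction and kubeconfig path as in the previous snippet:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Read the current Scale of the coredns Deployment, then write it back
	// with Spec.Replicas forced to 1.
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```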
	I1026 08:14:20.980642  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 08:14:20.980721  296225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 08:14:21.033516  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 08:14:21.138044  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.591629581s)
	I1026 08:14:21.234903  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.615454557s)
	I1026 08:14:21.234995  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.581561204s)
	I1026 08:14:21.256374  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 08:14:21.256451  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 08:14:21.508294  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.841077547s)
	I1026 08:14:21.571319  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 08:14:21.571400  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 08:14:21.722807  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 08:14:21.722941  296225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 08:14:21.912228  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1026 08:14:22.375930  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:22.849765  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.158620181s)
	I1026 08:14:22.849871  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.169954197s)
	I1026 08:14:23.429460  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.716066123s)
	I1026 08:14:23.429679  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.694748887s)
	I1026 08:14:23.754777  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.006964678s)
	I1026 08:14:23.754812  296225 addons.go:479] Verifying addon metrics-server=true in "addons-178002"
	I1026 08:14:24.724048  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.965146881s)
	I1026 08:14:24.724082  296225 addons.go:479] Verifying addon ingress=true in "addons-178002"
	I1026 08:14:24.724336  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.407495397s)
	I1026 08:14:24.724450  296225 addons.go:479] Verifying addon registry=true in "addons-178002"
	I1026 08:14:24.724661  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.380457101s)
	W1026 08:14:24.724690  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:24.724706  296225 retry.go:31] will retry after 370.180227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
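This failure is not transient: kubectl reports `[apiVersion not set, kind not set]` for ig-crd.yaml, meaning at least one YAML document in that file is empty or missing those two required fields, so every retry below hits the same wall. A hedged pre-flight check that walks each document of a multi-document manifest; the file path is taken from the log and gopkg.in/yaml.v3 is an assumed dependency:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Every kubectl-applied document must carry apiVersion and kind; an empty
// or truncated document triggers the exact message seen above.
func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion/kind not set\n", i)
		}
	}
}
```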
	I1026 08:14:24.724757  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.195146622s)
	I1026 08:14:24.725063  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.691465899s)
	W1026 08:14:24.725393  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 08:14:24.725412  296225 retry.go:31] will retry after 341.188253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
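Unlike the ig-crd case, this failure is transient: the VolumeSnapshotClass apply raced the CRD registration inside the same kubectl invocation, and "ensure CRDs are installed first" names the remedy. A sketch of waiting for a CRD's Established condition before applying objects of that kind; kubeconfig path and poll interval are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := apiextclient.NewForConfigOrDie(cfg)
	name := "volumesnapshotclasses.snapshot.storage.k8s.io"
	for {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```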
	I1026 08:14:24.727569  296225 out.go:179] * Verifying ingress addon...
	I1026 08:14:24.729743  296225 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-178002 service yakd-dashboard -n yakd-dashboard
	
	I1026 08:14:24.729850  296225 out.go:179] * Verifying registry addon...
	I1026 08:14:24.733635  296225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 08:14:24.734622  296225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 08:14:24.739062  296225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 08:14:24.739083  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:24.742494  296225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 08:14:24.742514  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
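Each kapi.go "waiting for pod" line that follows is one iteration of a poll that lists pods by label selector and checks their phase. A minimal client-go equivalent for the registry selector; namespace and selector come from the log, the half-second interval is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=registry",
		})
		// Keep polling until the first matching pod leaves Pending.
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("registry pod is Running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```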
	W1026 08:14:24.838601  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:25.018051  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.10572757s)
	I1026 08:14:25.018142  296225 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-178002"
	I1026 08:14:25.021547  296225 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 08:14:25.025343  296225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 08:14:25.029577  296225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 08:14:25.029598  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:25.066920  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 08:14:25.095482  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:25.238097  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:25.239144  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:25.530403  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:25.739198  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:25.739315  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:26.028815  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:26.237386  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:26.237841  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:26.336273  296225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 08:14:26.336395  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:26.353827  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
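sshutil.go dials the Docker-forwarded SSH port (127.0.0.1:33140) using the machine's id_rsa key. A minimal golang.org/x/crypto/ssh sketch of that connection; host-key verification is deliberately skipped here, which is acceptable only in a throwaway test harness like this one:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, port, and user are taken from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33140", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}
```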
	I1026 08:14:26.471924  296225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 08:14:26.484372  296225 addons.go:238] Setting addon gcp-auth=true in "addons-178002"
	I1026 08:14:26.484418  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:26.484856  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:26.501901  296225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 08:14:26.501965  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:26.519450  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:26.529760  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:26.736785  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:26.737995  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:27.028759  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:27.237043  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:27.237409  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:27.335418  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:27.532185  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:27.739930  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:27.740522  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:27.829914  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.762940686s)
	I1026 08:14:27.830060  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.734504608s)
	W1026 08:14:27.830087  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:27.830111  296225 retry.go:31] will retry after 332.647592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:27.830171  296225 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.328241877s)
	I1026 08:14:27.833471  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 08:14:27.836568  296225 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 08:14:27.839361  296225 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 08:14:27.839399  296225 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 08:14:27.852859  296225 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 08:14:27.852883  296225 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 08:14:27.865975  296225 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 08:14:27.865999  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 08:14:27.879261  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 08:14:28.029073  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:28.163435  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:28.240144  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:28.240453  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:28.468420  296225 addons.go:479] Verifying addon gcp-auth=true in "addons-178002"
	I1026 08:14:28.471553  296225 out.go:179] * Verifying gcp-auth addon...
	I1026 08:14:28.474514  296225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 08:14:28.480773  296225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 08:14:28.480839  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:28.529495  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:28.736923  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:28.738492  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:28.978184  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:29.029696  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 08:14:29.111324  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:29.111360  296225 retry.go:31] will retry after 798.198303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
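Across these attempts the retry delay climbs from roughly 370ms toward several seconds: retry.go applies a growing, jittered backoff between kubectl invocations. A generic sketch of the pattern; the exact growth curve and jitter in minikube's retry.go may differ, and apply() is a stand-in for the failing kubectl run:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs apply() up to attempts times, doubling the base delay each
// round and adding up to 50% random jitter, echoing the log format above.
func retry(apply func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d)/2 + 1))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(func() error { return errors.New("apply failed") }, 5, 300*time.Millisecond)
}
```

Note that backoff only helps transient failures; the ig-crd.yaml validation error retried here is deterministic, so every attempt fails identically until the manifest itself is fixed.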
	I1026 08:14:29.237883  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:29.239200  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:29.478070  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:29.529080  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:29.737174  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:29.738159  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:29.835250  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:29.910600  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:29.977695  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:30.030016  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:30.239329  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:30.240585  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:30.477981  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:30.528786  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:30.738001  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:30.738786  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:30.755784  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:30.755816  296225 retry.go:31] will retry after 1.020794769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:30.978027  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:31.029284  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:31.237999  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:31.238406  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:31.478427  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:31.528840  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:31.738617  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:31.739196  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:31.777327  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 08:14:31.835429  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:31.979995  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:32.031230  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:32.239361  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:32.239924  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:32.478748  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:32.529547  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 08:14:32.669558  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:32.669613  296225 retry.go:31] will retry after 1.109813752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:32.737241  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:32.737425  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:32.978894  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:33.029400  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:33.237873  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:33.238023  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:33.478021  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:33.529109  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:33.738427  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:33.738609  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:33.780495  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:33.978665  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:34.029511  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:34.238630  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:34.239621  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:34.335167  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:34.478755  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:34.529964  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 08:14:34.655609  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:34.655690  296225 retry.go:31] will retry after 1.76886346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:34.737404  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:34.738126  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:34.978675  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:35.029255  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:35.238445  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:35.238624  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:35.477710  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:35.529048  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:35.738356  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:35.738575  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:35.978608  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:36.029248  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:36.238211  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:36.238675  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:36.335824  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:36.425079  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:36.478406  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:36.528765  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:36.739054  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:36.739945  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:36.978047  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:37.030210  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:37.239621  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:37.240035  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:37.326293  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:37.326385  296225 retry.go:31] will retry after 4.147131696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:37.477647  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:37.528893  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:37.738096  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:37.738996  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:37.978115  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:38.030133  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:38.237935  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:38.238038  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:38.478106  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:38.529576  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:38.737519  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:38.738396  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:38.835281  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:38.978686  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:39.029084  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:39.237194  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:39.237690  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:39.478026  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:39.529501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:39.736648  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:39.737781  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:39.977742  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:40.030501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:40.238463  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:40.238789  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:40.478201  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:40.529067  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:40.737120  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:40.737581  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:40.978508  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:41.029496  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:41.237606  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:41.237779  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:41.334821  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:41.474236  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:41.478194  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:41.529640  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:41.738257  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:41.738685  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:41.978346  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:42.035550  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:42.240339  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:42.240679  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:42.335734  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:42.335770  296225 retry.go:31] will retry after 4.780772563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:42.478781  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:42.528798  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:42.737304  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:42.737835  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:42.977404  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:43.028435  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:43.236703  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:43.237210  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:43.337094  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:43.478461  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:43.529385  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:43.737615  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:43.737783  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:43.980153  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:44.029218  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:44.238857  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:44.239015  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:44.477752  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:44.528542  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:44.736761  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:44.737885  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:44.977547  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:45.033623  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:45.238873  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:45.239980  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:45.343562  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:45.477604  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:45.528784  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:45.737359  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:45.737503  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:45.977676  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:46.028757  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:46.237240  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:46.237850  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:46.477849  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:46.528716  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:46.736798  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:46.737670  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:46.977362  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:47.028532  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:47.117676  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:47.251251  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:47.252439  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:47.477980  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:47.528953  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:47.738466  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:47.738594  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:47.834839  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	W1026 08:14:47.958087  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:47.958120  296225 retry.go:31] will retry after 3.448056655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:47.977982  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:48.028983  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:48.238251  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:48.238374  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:48.478384  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:48.529384  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:48.736910  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:48.737433  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:48.977645  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:49.028362  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:49.237698  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:49.238135  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:49.477793  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:49.528925  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:49.737269  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:49.737765  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:49.834972  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:49.977894  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:50.029119  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:50.237191  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:50.237731  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:50.477359  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:50.529263  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:50.740190  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:50.740578  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:50.977374  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:51.028975  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:51.237376  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:51.237798  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:51.407086  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:51.478236  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:51.529623  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:51.738444  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:51.738574  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:51.977906  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:52.029150  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:52.239302  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:52.239430  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:52.243973  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:52.244003  296225 retry.go:31] will retry after 5.911071565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
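The retry.go lines in this stretch show randomized, growing delays between attempts (4.78s, 3.45s, 5.91s, and 7.78s below). The exact backoff policy is not visible in the log, so the following Go sketch shows only the general jittered-retry shape, with made-up base and attempt values:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered, growing delay
	// between failures, roughly the pattern the retry.go lines above suggest.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the base each round and add random jitter, which is why the
			// logged delays wobble instead of increasing monotonically.
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(4, 2*time.Second, func() error {
			return errors.New("apply failed") // stand-in for the kubectl apply above
		})
	}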
	W1026 08:14:52.334841  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:52.477781  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:52.528801  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:52.737710  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:52.738016  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:52.978339  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:53.028224  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:53.237766  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:53.237816  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:53.477626  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:53.528731  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:53.737175  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:53.737471  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:53.978463  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:54.029028  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:54.237825  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:54.238061  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:54.335623  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:54.477555  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:54.528359  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:54.738061  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:54.738172  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:54.977657  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:55.029002  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:55.237094  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:55.238162  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:55.477872  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:55.528861  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:55.737213  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:55.737420  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:55.977804  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:56.028662  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:56.237487  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:56.237645  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:56.477965  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:56.529077  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:56.737120  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:56.737454  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:56.835172  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:56.978032  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:57.028661  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:57.237682  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:57.237820  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:57.477451  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:57.529424  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:57.736950  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:57.737278  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:57.977953  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:58.029148  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:58.155294  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:58.239576  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:58.239941  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:58.478422  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:58.528923  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:58.737434  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:58.738069  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:58.835387  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	W1026 08:14:58.964117  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:58.964157  296225 retry.go:31] will retry after 7.777231146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:58.977919  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:59.028851  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:59.237415  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:59.238151  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:59.499041  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:59.560133  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:59.740966  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:59.741353  296225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 08:14:59.741375  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:59.908671  296225 node_ready.go:49] node "addons-178002" is "Ready"
	I1026 08:14:59.908706  296225 node_ready.go:38] duration metric: took 39.577192311s for node "addons-178002" to be "Ready" ...
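After roughly 40 seconds of the node_ready.go warnings above, the node's Ready condition finally flips to True. A hedged client-go sketch of that kind of wait, with the node name and kubeconfig path taken from the log but the poll interval assumed (illustrative, not minikube's node_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as logged; reading it from outside the node is an assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-178002", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println(`node "addons-178002" is "Ready"`)
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // assumed; the log warns every couple of seconds
		}
	}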
	I1026 08:14:59.908721  296225 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:14:59.908797  296225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:14:59.930818  296225 api_server.go:72] duration metric: took 41.589286991s to wait for apiserver process to appear ...
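The "apiserver process" wait is a shell probe rather than an API call: pgrep matches the full command line (-f) exactly (-x), returns the newest PID (-n), and exits non-zero until kube-apiserver is up. A minimal os/exec sketch mirroring the logged command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags as logged: -x exact match, -n newest process, -f match the
		// full command line; pgrep exits non-zero when nothing matches.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Printf("apiserver process not found yet: %v\n", err)
			return
		}
		fmt.Printf("apiserver pid: %s", out)
	}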
	I1026 08:14:59.930847  296225 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:14:59.930879  296225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:14:59.962270  296225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:14:59.964353  296225 api_server.go:141] control plane version: v1.34.1
	I1026 08:14:59.964381  296225 api_server.go:131] duration metric: took 33.515648ms to wait for apiserver health ...
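The healthz wait above is an HTTPS GET against https://192.168.49.2:8443/healthz, satisfied by a 200 response whose body is "ok". A sketch of that probe; disabling TLS verification here stands in for the cluster CA and client certificates a real check would present, and an unauthenticated request may be refused on clusters that gate /healthz:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip cert verification rather than wiring up
				// the cluster CA and client certs.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}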
	I1026 08:14:59.964390  296225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:14:59.997323  296225 system_pods.go:59] 19 kube-system pods found
	I1026 08:14:59.997428  296225 system_pods.go:61] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:14:59.997453  296225 system_pods.go:61] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending
	I1026 08:14:59.997504  296225 system_pods.go:61] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:14:59.997535  296225 system_pods.go:61] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:14:59.997558  296225 system_pods.go:61] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:14:59.997584  296225 system_pods.go:61] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:14:59.997614  296225 system_pods.go:61] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:14:59.997640  296225 system_pods.go:61] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:14:59.997671  296225 system_pods.go:61] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending
	I1026 08:14:59.997723  296225 system_pods.go:61] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:14:59.997832  296225 system_pods.go:61] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:14:59.997866  296225 system_pods.go:61] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:14:59.997891  296225 system_pods.go:61] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending
	I1026 08:14:59.997920  296225 system_pods.go:61] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending
	I1026 08:14:59.997950  296225 system_pods.go:61] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending
	I1026 08:14:59.998065  296225 system_pods.go:61] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:14:59.998102  296225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:14:59.998123  296225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending
	I1026 08:14:59.998151  296225 system_pods.go:61] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:14:59.998223  296225 system_pods.go:74] duration metric: took 33.817074ms to wait for pod list to return data ...
	I1026 08:14:59.998896  296225 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:14:59.998584  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:00.009297  296225 default_sa.go:45] found service account: "default"
	I1026 08:15:00.009395  296225 default_sa.go:55] duration metric: took 10.482339ms for default service account to be created ...
	I1026 08:15:00.009427  296225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:15:00.023374  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:00.023496  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:00.023522  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending
	I1026 08:15:00.023564  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:00.023595  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:00.023619  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:00.023643  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:00.023680  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:00.023709  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:00.023733  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending
	I1026 08:15:00.023754  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:00.023793  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:00.023824  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:00.023847  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending
	I1026 08:15:00.023869  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending
	I1026 08:15:00.023903  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending
	I1026 08:15:00.023935  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:00.023961  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.023986  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending
	I1026 08:15:00.024022  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:00.024067  296225 retry.go:31] will retry after 235.520543ms: missing components: kube-dns
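"missing components: kube-dns" means the k8s-apps check wants the CoreDNS pod (coredns-66bc5c9577-hbh8d above) in phase Running before it stops retrying. A client-go sketch of that check; the k8s-app=kube-dns selector is the conventional CoreDNS label and is an assumption here, not read from minikube's source:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path as logged
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // assumed CoreDNS label
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("missing components: kube-dns (%s is %s)\n", p.Name, p.Status.Phase)
				return
			}
		}
		fmt.Println("kube-dns is running")
	}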
	I1026 08:15:00.047537  296225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 08:15:00.047640  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:00.284763  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:00.294231  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:00.301046  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:00.301148  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:00.301175  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 08:15:00.301218  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:00.301257  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:00.301279  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:00.301303  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:00.301336  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:00.301369  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:00.301398  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 08:15:00.301420  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:00.303763  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:00.303845  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:00.303868  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending
	I1026 08:15:00.303894  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending
	I1026 08:15:00.303925  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending
	I1026 08:15:00.303953  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:00.303979  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.304004  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.304050  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:00.304096  296225 retry.go:31] will retry after 357.430229ms: missing components: kube-dns
	I1026 08:15:00.498940  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:00.532605  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:00.682494  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:00.682599  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:00.682626  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 08:15:00.682670  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:00.682699  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:00.682783  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:00.682813  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:00.682840  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:00.682878  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:00.682906  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 08:15:00.682935  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:00.682969  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:00.682999  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:00.683024  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 08:15:00.683064  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 08:15:00.683093  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 08:15:00.683118  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:00.683154  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.683186  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.683214  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:00.683270  296225 retry.go:31] will retry after 414.669984ms: missing components: kube-dns
	I1026 08:15:00.749213  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:00.749311  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:00.979671  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:01.094151  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:01.118002  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:01.118100  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:01.118131  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 08:15:01.118172  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:01.118221  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:01.118259  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:01.118287  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:01.118311  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:01.118351  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:01.118380  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 08:15:01.118400  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:01.118442  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:01.118474  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:01.118519  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 08:15:01.118549  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 08:15:01.118572  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 08:15:01.118615  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:01.118646  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:01.118671  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:01.118737  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:01.118773  296225 retry.go:31] will retry after 529.848998ms: missing components: kube-dns
	I1026 08:15:01.239240  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:01.239681  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:01.490478  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:01.562652  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:01.655406  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:01.655445  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Running
	I1026 08:15:01.655456  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 08:15:01.655463  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:01.655471  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:01.655476  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:01.655481  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:01.655485  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:01.655490  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:01.655502  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 08:15:01.655506  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:01.655514  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:01.655521  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:01.655534  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 08:15:01.655541  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 08:15:01.655552  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 08:15:01.655558  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:01.655572  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:01.655579  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:01.655583  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Running
	I1026 08:15:01.655593  296225 system_pods.go:126] duration metric: took 1.646131177s to wait for k8s-apps to be running ...
	I1026 08:15:01.655607  296225 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:15:01.655669  296225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:15:01.677560  296225 system_svc.go:56] duration metric: took 21.942206ms WaitForService to wait for kubelet
	I1026 08:15:01.677588  296225 kubeadm.go:586] duration metric: took 43.336062516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
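The kubelet check, like the pgrep probe earlier, shells out: systemctl is-active exits 0 while the queried unit is active, so the error from Run doubles as the health signal. A small os/exec sketch with the arguments copied from the logged command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Arguments as logged; --quiet suppresses output so only the exit
		// status matters, and Run returns an error on a non-zero exit.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Printf("kubelet service not active: %v\n", err)
			return
		}
		fmt.Println("kubelet service is active")
	}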
	I1026 08:15:01.677607  296225 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:15:01.681937  296225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:15:01.681970  296225 node_conditions.go:123] node cpu capacity is 2
	I1026 08:15:01.681987  296225 node_conditions.go:105] duration metric: took 4.372283ms to run NodePressure ...
	I1026 08:15:01.682000  296225 start.go:241] waiting for startup goroutines ...
	I1026 08:15:01.738886  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:01.739068  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:01.981768  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:02.030030  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:02.239010  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:02.239816  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:02.478155  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:02.529769  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:02.739461  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:02.739674  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:02.978060  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:03.029837  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:03.240058  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:03.240268  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:03.479059  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:03.580223  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:03.738940  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:03.739257  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:03.978381  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:04.028704  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:04.238528  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:04.238705  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:04.477510  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:04.529111  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:04.739741  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:04.739906  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:04.978951  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:05.029594  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:05.241795  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:05.241987  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:05.477900  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:05.529352  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:05.738487  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:05.739039  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:05.977488  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:06.028977  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:06.238144  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:06.239366  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:06.477344  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:06.528418  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:06.736565  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:06.738790  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:06.742045  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:15:06.977341  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:07.028814  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:07.246938  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:07.247208  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:07.478299  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:07.530021  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:07.739736  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:07.740458  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:07.807486  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.065408891s)
	W1026 08:15:07.807519  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:15:07.807537  296225 retry.go:31] will retry after 18.446026225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
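	[Editor's note] The failure above is client-side validation: kubectl applied every object it could parse (hence the "unchanged"/"configured" stdout) but still exited with status 1, because at least one YAML document inside ig-crd.yaml carries no apiVersion or kind header. kubectl's own suggestion, --validate=false, would mask the problem rather than fix the manifest. As a hedged illustration (a hypothetical checker, not part of minikube or kubectl; splitting on "---" only approximates kubectl's real multi-document parsing), one could scan the file for the offending document:

	// Hypothetical pre-flight check for the "apiVersion not set, kind not set"
	// error above; NOT part of minikube or kubectl.
	package main

	import (
		"fmt"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	// typeMeta mirrors the two fields kubectl validation complained about.
	// sigs.k8s.io/yaml converts YAML to JSON first, hence the json tags.
	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	func main() {
		raw, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path taken from the log
		if err != nil {
			panic(err)
		}
		// Approximation of kubectl's document splitting on "---" separator lines.
		for i, doc := range strings.Split(string(raw), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue // empty documents are legal and silently skipped
			}
			var tm typeMeta
			if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
				fmt.Printf("document %d: invalid YAML: %v\n", i, err)
				continue
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				fmt.Printf("document %d: apiVersion/kind not set (would fail kubectl validation)\n", i)
			}
		}
	}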
	I1026 08:15:07.977536  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:08.029434  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:08.238521  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:08.239501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:08.478073  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:08.529996  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:08.740095  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:08.740416  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:08.977745  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:09.030513  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:09.238554  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:09.239805  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:09.477984  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:09.529900  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:09.736702  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:09.738704  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:09.977638  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:10.028964  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:10.236927  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:10.238903  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:10.477613  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:10.528652  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:10.739659  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:10.741388  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:10.977693  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:11.030753  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:11.241218  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:11.247845  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:11.478571  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:11.529499  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:11.739736  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:11.740168  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:11.978955  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:12.029052  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:12.241592  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:12.245300  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:12.478560  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:12.539694  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:12.738732  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:12.738907  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:12.978432  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:13.029589  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:13.239035  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:13.239284  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:13.478692  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:13.581127  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:13.737375  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:13.737971  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:13.978508  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:14.028997  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:14.238140  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:14.239347  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:14.477518  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:14.529302  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:14.740226  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:14.740652  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:14.978026  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:15.084348  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:15.238285  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:15.238667  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:15.477787  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:15.579233  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:15.740241  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:15.740807  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:15.978328  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:16.028497  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:16.236652  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:16.239092  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:16.482132  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:16.529510  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:16.737057  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:16.737473  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:16.977689  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:17.029247  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:17.237753  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:17.238344  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:17.479206  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:17.529928  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:17.739053  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:17.739557  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:17.977552  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:18.030354  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:18.240778  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:18.242979  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:18.478459  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:18.528786  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:18.739016  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:18.739063  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:18.977806  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:19.029133  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:19.236953  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:19.239017  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:19.478866  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:19.528873  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:19.737336  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:19.737461  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:19.977754  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:20.030268  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:20.239874  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:20.245584  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:20.477884  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:20.531042  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:20.749220  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:20.749205  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:20.977640  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:21.029806  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:21.239440  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:21.239846  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:21.478919  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:21.580106  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:21.739822  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:21.740155  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:21.991418  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:22.092939  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:22.241392  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:22.241999  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:22.478530  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:22.528460  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:22.751813  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:22.752040  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:22.978454  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:23.029481  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:23.238299  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:23.238513  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:23.478074  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:23.529818  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:23.741256  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:23.741599  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:23.979924  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:24.029255  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:24.239008  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:24.239352  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:24.478355  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:24.528811  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:24.737242  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:24.739670  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:24.978998  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:25.030333  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:25.239795  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:25.239931  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:25.478424  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:25.528840  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:25.737953  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:25.738862  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:25.978323  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:26.029709  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:26.236884  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:26.238368  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:26.254669  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:15:26.478501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:26.529422  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:26.738309  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:26.738928  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:26.978231  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:27.031663  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:27.239026  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:27.240294  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:27.373066  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.118353536s)
	W1026 08:15:27.373107  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:15:27.373128  296225 retry.go:31] will retry after 42.725156939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
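	[Editor's note] Note the growing delays between attempts: 18.4s after the first failed apply, 42.7s after this second one. That roughly-doubling pattern is consistent with randomized exponential backoff, with retry.go:31 logging the chosen interval before sleeping. A minimal sketch of that pattern (assumed shape only, not minikube's actual retry.go; retryWithBackoff and its parameters are hypothetical):

	// Assumed sketch of retry with jittered exponential backoff
	// (NOT minikube's retry.go).
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs op up to attempts times, sleeping base * 2^i * jitter
	// between failures, and logs the delay the way the report's retry lines do.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			jitter := 1 + rand.Float64() // in [1.0, 2.0)
			sleep := time.Duration(float64(base) * float64(int(1)<<i) * jitter)
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(3, 10*time.Second, func() error {
			return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
		})
	}

	With a base around ten seconds this produces delays in the tens of seconds that roughly double per attempt, in the same range as the 18s and 43s intervals logged here.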
	I1026 08:15:27.480356  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:27.528572  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:27.745519  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:27.746019  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:27.979363  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:28.029212  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:28.239500  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:28.240098  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:28.478186  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:28.529960  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:28.739200  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:28.739647  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:28.978351  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:29.082888  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:29.239246  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:29.239769  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:29.478242  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:29.529520  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:29.743575  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:29.743862  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:29.978489  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:30.089803  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:30.238007  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:30.241514  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:30.477821  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:30.529159  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:30.738984  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:30.739120  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:30.978083  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:31.029228  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:31.238656  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:31.238887  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:31.478903  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:31.530158  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:31.741354  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:31.741518  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:31.977732  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:32.029407  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:32.239608  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:32.239750  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:32.480310  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:32.529240  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:32.737713  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:32.738155  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:32.978573  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:33.028733  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:33.237917  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:33.238402  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:33.479167  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:33.530158  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:33.737454  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:33.738573  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:33.978278  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:34.030057  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:34.239320  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:34.239640  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:34.477911  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:34.529572  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:34.739189  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:34.739900  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:34.978301  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:35.030535  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:35.238969  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:35.240235  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:35.479115  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:35.580164  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:35.738527  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:35.740351  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:35.977867  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:36.029155  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:36.238302  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:36.239667  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:36.477903  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:36.529534  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:36.739331  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:36.740308  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:36.978908  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:37.031169  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:37.239073  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:37.240230  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:37.478802  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:37.529719  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:37.739390  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:37.740393  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:37.978469  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:38.029767  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:38.238026  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:38.238460  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:38.477530  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:38.529381  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:38.739023  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:38.739512  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:38.977996  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:39.029904  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:39.238647  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:39.238863  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:39.478470  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:39.529453  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:39.738904  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:39.739322  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:39.978826  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:40.037382  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:40.239813  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:40.239963  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:40.478472  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:40.528541  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:40.736352  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:40.738331  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:40.977471  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:41.028469  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:41.236734  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:41.239221  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:41.478839  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:41.529958  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:41.738995  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:41.739343  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:41.981562  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:42.034397  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:42.241726  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:42.242278  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:42.478312  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:42.532118  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:42.739161  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:42.739527  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:42.980332  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:43.030903  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:43.240494  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:43.240825  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:43.478332  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:43.529926  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:43.737641  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:43.738181  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:43.978159  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:44.030139  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:44.236944  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:44.239059  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:44.478442  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:44.531043  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:44.739417  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:44.739554  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:44.977650  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:45.048679  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:45.240688  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:45.240752  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:45.478325  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:45.528609  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:45.737957  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:45.738136  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:45.978681  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:46.028977  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:46.238105  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:46.238954  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:46.477981  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:46.529375  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:46.739376  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:46.740368  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:46.978687  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:47.029584  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:47.238019  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:47.238843  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:47.477925  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:47.529323  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:47.742434  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:47.742608  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:47.977843  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:48.033250  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:48.237838  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:48.237956  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:48.478492  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:48.529399  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:48.741188  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:48.741383  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:48.977968  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:49.030444  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:49.239290  296225 kapi.go:107] duration metric: took 1m24.504662711s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 08:15:49.239692  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:49.478340  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:49.529337  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:49.738705  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:49.978206  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:50.030222  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:50.238262  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:50.478610  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:50.529017  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:50.738217  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:50.994806  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:51.032938  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:51.240338  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:51.477970  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:51.530471  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:51.737343  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:51.982965  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:52.030162  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:52.237407  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:52.477791  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:52.528830  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:52.738339  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:52.979635  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:53.029539  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:53.238402  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:53.478683  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:53.531118  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:53.741117  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:53.980074  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:54.030207  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:54.238096  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:54.478571  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:54.528571  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:54.741273  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:54.978684  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:55.030119  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:55.239146  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:55.479491  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:55.529142  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:55.740663  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:55.977960  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:56.029367  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:56.236953  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:56.477729  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:56.529188  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:56.737379  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:56.978566  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:57.029258  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:57.237450  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:57.477606  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:57.529496  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:57.738310  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:57.978759  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:58.029633  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:58.236914  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:58.477809  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:58.529218  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:58.763626  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:58.978365  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:59.028546  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:59.237339  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:59.477687  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:59.529451  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:59.743372  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:59.984010  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:00.128558  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:00.242110  296225 kapi.go:107] duration metric: took 1m35.50846925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 08:16:00.480410  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:00.530349  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:00.977490  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:01.030072  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:01.477624  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:01.529509  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:01.978001  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:02.029935  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:02.478416  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:02.528483  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:02.979617  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:03.030041  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:03.478674  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:03.529578  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:03.978606  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:04.029768  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:04.478464  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:04.529835  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:04.978692  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:05.029455  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:05.477913  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:05.529578  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:05.978412  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:06.081661  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:06.477499  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:06.529088  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:06.978066  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:07.029273  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:07.477585  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:07.529028  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:07.978272  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:08.038879  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:08.479159  296225 kapi.go:107] duration metric: took 1m40.004640372s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 08:16:08.482596  296225 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-178002 cluster.
	I1026 08:16:08.485636  296225 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 08:16:08.488522  296225 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
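Editor's note on the `gcp-auth-skip-secret` hint above: a minimal sketch of a pod that opts out of credential mounting, assuming the webhook only checks the label shown (the pod name and label value are illustrative; only the label key comes from the log message, and the image is one already pulled in this run):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # key taken from the message above; value assumed
    spec:
      containers:
      - name: app
        image: docker.io/kicbase/echo-server:1.0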
	I1026 08:16:08.530127  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:09.029775  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:09.530416  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:10.030707  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:10.099063  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:16:10.529927  296225 kapi.go:107] duration metric: took 1m45.504581847s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W1026 08:16:10.944054  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:16:10.944156  296225 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
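Editor's note on the validation failure above: kubectl is rejecting at least one document in ig-crd.yaml because it lacks the two fields required on every Kubernetes object. A minimal sketch of a well-formed document header, with placeholder names (the actual contents of ig-crd.yaml are not shown in this log):

    apiVersion: apiextensions.k8s.io/v1   # required on every document
    kind: CustomResourceDefinition        # required on every document
    metadata:
      name: examples.example.com          # placeholder; real CRD name not in this log

The error text also offers --validate=false, but that only suppresses the client-side check; it would not turn a document missing apiVersion and kind into a valid object.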
	I1026 08:16:10.947965  296225 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, default-storageclass, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1026 08:16:10.951223  296225 addons.go:514] duration metric: took 1m52.609271345s for enable addons: enabled=[nvidia-device-plugin registry-creds default-storageclass amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1026 08:16:10.951288  296225 start.go:246] waiting for cluster config update ...
	I1026 08:16:10.951315  296225 start.go:255] writing updated cluster config ...
	I1026 08:16:10.951623  296225 ssh_runner.go:195] Run: rm -f paused
	I1026 08:16:10.958578  296225 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:16:10.971159  296225 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hbh8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:10.994934  296225 pod_ready.go:94] pod "coredns-66bc5c9577-hbh8d" is "Ready"
	I1026 08:16:10.995006  296225 pod_ready.go:86] duration metric: took 23.815542ms for pod "coredns-66bc5c9577-hbh8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:10.998128  296225 pod_ready.go:83] waiting for pod "etcd-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.009222  296225 pod_ready.go:94] pod "etcd-addons-178002" is "Ready"
	I1026 08:16:11.009260  296225 pod_ready.go:86] duration metric: took 11.105166ms for pod "etcd-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.099128  296225 pod_ready.go:83] waiting for pod "kube-apiserver-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.104813  296225 pod_ready.go:94] pod "kube-apiserver-addons-178002" is "Ready"
	I1026 08:16:11.104842  296225 pod_ready.go:86] duration metric: took 5.682497ms for pod "kube-apiserver-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.107608  296225 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.363419  296225 pod_ready.go:94] pod "kube-controller-manager-addons-178002" is "Ready"
	I1026 08:16:11.363450  296225 pod_ready.go:86] duration metric: took 255.813417ms for pod "kube-controller-manager-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.562828  296225 pod_ready.go:83] waiting for pod "kube-proxy-s87tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.962952  296225 pod_ready.go:94] pod "kube-proxy-s87tq" is "Ready"
	I1026 08:16:11.963034  296225 pod_ready.go:86] duration metric: took 400.176991ms for pod "kube-proxy-s87tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:12.164606  296225 pod_ready.go:83] waiting for pod "kube-scheduler-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:12.563214  296225 pod_ready.go:94] pod "kube-scheduler-addons-178002" is "Ready"
	I1026 08:16:12.563241  296225 pod_ready.go:86] duration metric: took 398.607264ms for pod "kube-scheduler-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:12.563255  296225 pod_ready.go:40] duration metric: took 1.604603548s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:16:12.618064  296225 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 08:16:12.621306  296225 out.go:179] * Done! kubectl is now configured to use "addons-178002" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 08:19:12 addons-178002 crio[832]: time="2025-10-26T08:19:12.823553833Z" level=info msg="Removed container 96ee442f595ac0c462f439377ab38efef3d375e067f93bbbcb349e823ab9e902: kube-system/registry-creds-764b6fb674-c4cvz/registry-creds" id=595d375f-9916-4ae0-81f1-9e62ef0e68fd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.809555302Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-ds5lc/POD" id=a26f7559-f999-4922-b01d-2d0a4f9dfefb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.809624357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.824390415Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ds5lc Namespace:default ID:cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811 UID:0d400630-acb4-4d6d-926f-5c39648fc954 NetNS:/var/run/netns/cb96e5fc-03f8-45c6-b39f-869946e89582 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013ba570}] Aliases:map[]}"
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.824441887Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-ds5lc to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.841498328Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ds5lc Namespace:default ID:cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811 UID:0d400630-acb4-4d6d-926f-5c39648fc954 NetNS:/var/run/netns/cb96e5fc-03f8-45c6-b39f-869946e89582 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013ba570}] Aliases:map[]}"
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.841688435Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-ds5lc for CNI network kindnet (type=ptp)"
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.854587696Z" level=info msg="Ran pod sandbox cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811 with infra container: default/hello-world-app-5d498dc89-ds5lc/POD" id=a26f7559-f999-4922-b01d-2d0a4f9dfefb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.856031226Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f99172ef-f519-4087-a454-8a6e56bc1e19 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.856160926Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=f99172ef-f519-4087-a454-8a6e56bc1e19 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.856203692Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=f99172ef-f519-4087-a454-8a6e56bc1e19 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.859011907Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=dea2ea92-6e81-4553-b97e-d85432e447bb name=/runtime.v1.ImageService/PullImage
	Oct 26 08:19:14 addons-178002 crio[832]: time="2025-10-26T08:19:14.86023639Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.482997304Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=dea2ea92-6e81-4553-b97e-d85432e447bb name=/runtime.v1.ImageService/PullImage
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.487238021Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6f63c8ab-5d49-4872-a15c-442afc128e35 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.488929127Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7f52d91d-7dd9-452d-96f2-bccd3b760398 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.497906313Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-ds5lc/hello-world-app" id=92de7a9c-bf15-415a-9286-9e1b528b7a30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.498109762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.505356022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.505550527Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c473ac9acaf0726909bbf14d51cfa566c1d748116647d41aed9028736596ae08/merged/etc/passwd: no such file or directory"
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.505572968Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c473ac9acaf0726909bbf14d51cfa566c1d748116647d41aed9028736596ae08/merged/etc/group: no such file or directory"
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.505845858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.527327035Z" level=info msg="Created container ac899fbbbb9082a1cd1f4cc1a28c463d81061aaf762b4f250175169f684646b7: default/hello-world-app-5d498dc89-ds5lc/hello-world-app" id=92de7a9c-bf15-415a-9286-9e1b528b7a30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.528318818Z" level=info msg="Starting container: ac899fbbbb9082a1cd1f4cc1a28c463d81061aaf762b4f250175169f684646b7" id=7c63780d-2958-4d57-8f8b-aa6f6039ce6d name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:19:15 addons-178002 crio[832]: time="2025-10-26T08:19:15.532627376Z" level=info msg="Started container" PID=7219 containerID=ac899fbbbb9082a1cd1f4cc1a28c463d81061aaf762b4f250175169f684646b7 description=default/hello-world-app-5d498dc89-ds5lc/hello-world-app id=7c63780d-2958-4d57-8f8b-aa6f6039ce6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ac899fbbbb908       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   cf2e248d7ee32       hello-world-app-5d498dc89-ds5lc             default
	6d38c87f42a71       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             4 seconds ago            Exited              registry-creds                           1                   baef8a3ab303d       registry-creds-764b6fb674-c4cvz             kube-system
	2f0805b5cd2af       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   456e1a17f0af3       nginx                                       default
	5dd800b50cf6e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   4118ae881c435       busybox                                     default
	656a5504f6140       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	c400235dacf6d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   0c83150fbe92a       gcp-auth-78565c9fb4-4bzxg                   gcp-auth
	6e68b380d42de       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	cb1293525905b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	337cf8aa6fc1e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	4bfbc1f9f76f8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	b40d381cbc670       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   7fd4481cf3cfd       ingress-nginx-controller-675c5ddd98-jfslq   ingress-nginx
	11cc83ea16cf9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   02cbd878753c5       gadget-fpvwp                                gadget
	11f21105b6321       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   822b63ef6af37       registry-proxy-n9gsn                        kube-system
	9ac72e95bdbb9       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   cdc504816b9c8       csi-hostpath-attacher-0                     kube-system
	e01421ba1d79e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	7cb1110433e18       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   42636bb7ea5d8       kube-ingress-dns-minikube                   kube-system
	93c19aa863bec       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    2                   fadfe17ca2102       ingress-nginx-admission-patch-9d9jx         ingress-nginx
	90f02169324ed       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   1cf41d655b554       yakd-dashboard-5ff678cb9-pr2hf              yakd-dashboard
	6f48f953a8791       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   99d67686ac378       registry-6b586f9694-t9spk                   kube-system
	ff820531f07c3       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   9e9ca2665b361       cloud-spanner-emulator-86bd5cbb97-kbp57     default
	29c4bdd09e074       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   1c5bffbc201d9       ingress-nginx-admission-create-thdtm        ingress-nginx
	0c22b9c32c7f1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   8f521adb78053       local-path-provisioner-648f6765c9-ftt78     local-path-storage
	2d44eec32cccd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   4406843563bb0       snapshot-controller-7d9fbc56b8-b6xk7        kube-system
	293368e4d2e35       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   eea3a34db48e7       metrics-server-85b7d694d7-bgt5w             kube-system
	b5798323fc825       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   8288897b29078       csi-hostpath-resizer-0                      kube-system
	610b6f1646fb9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   1622d406430d5       snapshot-controller-7d9fbc56b8-2ppj6        kube-system
	3289a391ffd5d       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   34fbc96fd08e6       nvidia-device-plugin-daemonset-b6795        kube-system
	55db9ad7dfb08       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   94ddfeaf4fb61       coredns-66bc5c9577-hbh8d                    kube-system
	c3689b3808378       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   6e62dce330317       storage-provisioner                         kube-system
	8c52f0a3eb944       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   8975e7eced745       kube-proxy-s87tq                            kube-system
	ed2c281df9eab       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   0a9940c9b31ec       kindnet-bmsbv                               kube-system
	e28a155094997       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   1be1e494e8772       kube-scheduler-addons-178002                kube-system
	a8be4f8cce6ed       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   a4e9f4835f54f       kube-controller-manager-addons-178002       kube-system
	a0394733465ef       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   f8cc8b9325162       kube-apiserver-addons-178002                kube-system
	6bd1c5cde2562       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   d03a17fc541b7       etcd-addons-178002                          kube-system
	
	
	==> coredns [55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794] <==
	[INFO] 10.244.0.7:47496 - 2164 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002043598s
	[INFO] 10.244.0.7:47496 - 55186 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000137675s
	[INFO] 10.244.0.7:47496 - 8160 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000089166s
	[INFO] 10.244.0.7:59374 - 26720 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000178193s
	[INFO] 10.244.0.7:59374 - 26498 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071878s
	[INFO] 10.244.0.7:51433 - 19960 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106848s
	[INFO] 10.244.0.7:51433 - 19516 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000195883s
	[INFO] 10.244.0.7:48015 - 10753 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108063s
	[INFO] 10.244.0.7:48015 - 10556 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083069s
	[INFO] 10.244.0.7:43807 - 40751 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001356264s
	[INFO] 10.244.0.7:43807 - 40964 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001527959s
	[INFO] 10.244.0.7:54908 - 64161 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158984s
	[INFO] 10.244.0.7:54908 - 63972 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076522s
	[INFO] 10.244.0.21:59030 - 11727 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000182632s
	[INFO] 10.244.0.21:34008 - 33059 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000129634s
	[INFO] 10.244.0.21:45650 - 17774 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163596s
	[INFO] 10.244.0.21:44278 - 47069 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155506s
	[INFO] 10.244.0.21:34621 - 31222 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00020312s
	[INFO] 10.244.0.21:37742 - 62714 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000206993s
	[INFO] 10.244.0.21:52412 - 26371 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001902305s
	[INFO] 10.244.0.21:40266 - 55033 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002141175s
	[INFO] 10.244.0.21:35362 - 21519 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002385863s
	[INFO] 10.244.0.21:36492 - 4485 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002826549s
	[INFO] 10.244.0.24:46331 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000323442s
	[INFO] 10.244.0.24:55053 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129946s
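Editor's note on the NXDOMAIN entries above: they are the pod resolver walking its search path, not registry lookup failures. With the default ndots:5, a name like registry.kube-system.svc.cluster.local (four dots) is first expanded with each cluster search domain, each expansion answered NXDOMAIN, before the verbatim name resolves (the NOERROR lines). If the extra queries ever mattered, a pod could lower ndots via dnsConfig; a minimal sketch with hypothetical names, not a config taken from this run:

    apiVersion: v1
    kind: Pod
    metadata:
      name: fewer-dns-expansions         # hypothetical
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                     # names containing a dot are tried as absolute first
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox   # image repo seen in this run; tag omitted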
	
	
	==> describe nodes <==
	Name:               addons-178002
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-178002
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=addons-178002
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_14_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-178002
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-178002"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:14:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-178002
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:19:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:19:09 +0000   Sun, 26 Oct 2025 08:14:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:19:09 +0000   Sun, 26 Oct 2025 08:14:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:19:09 +0000   Sun, 26 Oct 2025 08:14:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:19:09 +0000   Sun, 26 Oct 2025 08:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-178002
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                22814fea-4664-4b05-819b-2c2b8600c797
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     cloud-spanner-emulator-86bd5cbb97-kbp57      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  default                     hello-world-app-5d498dc89-ds5lc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-fpvwp                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-78565c9fb4-4bzxg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jfslq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m52s
	  kube-system                 coredns-66bc5c9577-hbh8d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m58s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-zbhlb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 etcd-addons-178002                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
	  kube-system                 kindnet-bmsbv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-178002                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-178002        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-s87tq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-178002                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 metrics-server-85b7d694d7-bgt5w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m53s
	  kube-system                 nvidia-device-plugin-daemonset-b6795         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 registry-6b586f9694-t9spk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-creds-764b6fb674-c4cvz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-proxy-n9gsn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-2ppj6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 snapshot-controller-7d9fbc56b8-b6xk7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  local-path-storage          local-path-provisioner-648f6765c9-ftt78      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pr2hf               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m56s                  kube-proxy       
	  Normal   Starting                 5m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node addons-178002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node addons-178002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m11s (x8 over 5m11s)  kubelet          Node addons-178002 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m3s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s                   kubelet          Node addons-178002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                   kubelet          Node addons-178002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s                   kubelet          Node addons-178002 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s                  node-controller  Node addons-178002 event: Registered Node addons-178002 in Controller
	  Normal   NodeReady                4m17s                  kubelet          Node addons-178002 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014214] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501900] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033459] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.999923] kauditd_printk_skb: 36 callbacks suppressed
	[Oct26 08:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 08:14] overlayfs: idmapped layers are currently not supported
	[  +0.063904] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f] <==
	{"level":"warn","ts":"2025-10-26T08:14:08.415475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.439975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.474530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.522813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.575074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.597207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.642199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.679834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.716176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.767027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.781302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.825238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.879325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.911720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.935079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.978812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:09.003323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:09.022838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:09.119825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:25.441501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:25.459257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.190705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.224137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.254292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.272544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49262","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c400235dacf6d8d71200444627a71086d5212c6c6fe543d1b5f0bb91ca5d6b2b] <==
	2025/10/26 08:16:07 GCP Auth Webhook started!
	2025/10/26 08:16:13 Ready to marshal response ...
	2025/10/26 08:16:13 Ready to write response ...
	2025/10/26 08:16:13 Ready to marshal response ...
	2025/10/26 08:16:13 Ready to write response ...
	2025/10/26 08:16:13 Ready to marshal response ...
	2025/10/26 08:16:13 Ready to write response ...
	2025/10/26 08:16:34 Ready to marshal response ...
	2025/10/26 08:16:34 Ready to write response ...
	2025/10/26 08:16:36 Ready to marshal response ...
	2025/10/26 08:16:36 Ready to write response ...
	2025/10/26 08:16:54 Ready to marshal response ...
	2025/10/26 08:16:54 Ready to write response ...
	2025/10/26 08:17:06 Ready to marshal response ...
	2025/10/26 08:17:06 Ready to write response ...
	2025/10/26 08:17:28 Ready to marshal response ...
	2025/10/26 08:17:28 Ready to write response ...
	2025/10/26 08:17:28 Ready to marshal response ...
	2025/10/26 08:17:28 Ready to write response ...
	2025/10/26 08:17:36 Ready to marshal response ...
	2025/10/26 08:17:36 Ready to write response ...
	2025/10/26 08:19:14 Ready to marshal response ...
	2025/10/26 08:19:14 Ready to write response ...
	
	
	==> kernel <==
	 08:19:16 up  2:01,  0 user,  load average: 0.44, 2.08, 3.14
	Linux addons-178002 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d] <==
	I1026 08:17:09.111278       1 main.go:301] handling current node
	I1026 08:17:19.112097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:17:19.112131       1 main.go:301] handling current node
	I1026 08:17:29.111909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:17:29.111943       1 main.go:301] handling current node
	I1026 08:17:39.111435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:17:39.111492       1 main.go:301] handling current node
	I1026 08:17:49.111256       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:17:49.111304       1 main.go:301] handling current node
	I1026 08:17:59.111639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:17:59.111772       1 main.go:301] handling current node
	I1026 08:18:09.118817       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:18:09.118865       1 main.go:301] handling current node
	I1026 08:18:19.118782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:18:19.118870       1 main.go:301] handling current node
	I1026 08:18:29.116352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:18:29.116385       1 main.go:301] handling current node
	I1026 08:18:39.119219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:18:39.119353       1 main.go:301] handling current node
	I1026 08:18:49.118802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:18:49.118837       1 main.go:301] handling current node
	I1026 08:18:59.118027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:18:59.118065       1 main.go:301] handling current node
	I1026 08:19:09.118804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:19:09.119011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d] <==
	I1026 08:14:28.274774       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.111.97"}
	W1026 08:14:47.184681       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:47.214835       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:47.254184       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:47.269608       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:59.537397       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.111.97:443: connect: connection refused
	E1026 08:14:59.537449       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.111.97:443: connect: connection refused" logger="UnhandledError"
	W1026 08:14:59.537914       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.111.97:443: connect: connection refused
	E1026 08:14:59.537949       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.111.97:443: connect: connection refused" logger="UnhandledError"
	W1026 08:14:59.626864       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.111.97:443: connect: connection refused
	E1026 08:14:59.626912       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.111.97:443: connect: connection refused" logger="UnhandledError"
	E1026 08:15:22.722067       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.121:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.121:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.151.121:443: connect: connection refused" logger="UnhandledError"
	W1026 08:15:22.722379       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 08:15:22.722485       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 08:15:22.766586       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 08:16:23.083736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45008: use of closed network connection
	E1026 08:16:23.352171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45026: use of closed network connection
	E1026 08:16:23.506149       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45056: use of closed network connection
	I1026 08:16:44.523601       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 08:16:54.692534       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 08:16:55.051235       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.224.129"}
	I1026 08:19:14.662024       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.205.177"}
	
	
	==> kube-controller-manager [a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56] <==
	I1026 08:14:17.169510       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:14:17.169526       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:14:17.175709       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:14:17.205167       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 08:14:17.205304       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 08:14:17.205374       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 08:14:17.205405       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 08:14:17.205448       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 08:14:17.208624       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 08:14:17.209321       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:14:17.210857       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:14:17.211011       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:14:17.214473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:14:17.215421       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-178002" podCIDRs=["10.244.0.0/24"]
	E1026 08:14:23.472775       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 08:14:47.173543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 08:14:47.173702       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 08:14:47.173748       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 08:14:47.228591       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 08:14:47.234944       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 08:14:47.273859       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:14:48.335790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:15:02.167964       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1026 08:15:17.280205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 08:15:18.347539       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1] <==
	I1026 08:14:20.150608       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:14:20.280417       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:14:20.398289       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:14:20.398319       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 08:14:20.398398       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:14:20.475729       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:14:20.475781       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:14:20.491720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:14:20.492160       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:14:20.492182       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:14:20.493545       1 config.go:200] "Starting service config controller"
	I1026 08:14:20.493566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:14:20.493582       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:14:20.493587       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:14:20.493597       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:14:20.493607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:14:20.494276       1 config.go:309] "Starting node config controller"
	I1026 08:14:20.494290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:14:20.494296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:14:20.594114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:14:20.594155       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:14:20.594188       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557] <==
	E1026 08:14:10.284512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:14:10.284613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:14:10.284681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:14:10.285165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:14:10.285238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:14:10.285289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:14:10.285333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:14:10.285388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:14:10.285430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:14:10.285493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:14:10.285540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:14:10.285586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:14:10.285609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:14:10.291179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:14:10.291538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:14:10.291644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:14:10.291734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:14:11.121865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:14:11.190574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:14:11.251077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:14:11.276694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:14:11.306600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:14:11.382166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:14:11.760688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 08:14:14.844478       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:17:38 addons-178002 kubelet[1283]: I1026 08:17:38.658237    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e63e534-4202-4484-8ee3-9980f7f5a888-kube-api-access-sb7gc" (OuterVolumeSpecName: "kube-api-access-sb7gc") pod "5e63e534-4202-4484-8ee3-9980f7f5a888" (UID: "5e63e534-4202-4484-8ee3-9980f7f5a888"). InnerVolumeSpecName "kube-api-access-sb7gc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 08:17:38 addons-178002 kubelet[1283]: I1026 08:17:38.752715    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sb7gc\" (UniqueName: \"kubernetes.io/projected/5e63e534-4202-4484-8ee3-9980f7f5a888-kube-api-access-sb7gc\") on node \"addons-178002\" DevicePath \"\""
	Oct 26 08:17:38 addons-178002 kubelet[1283]: I1026 08:17:38.752756    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5e63e534-4202-4484-8ee3-9980f7f5a888-gcp-creds\") on node \"addons-178002\" DevicePath \"\""
	Oct 26 08:17:38 addons-178002 kubelet[1283]: I1026 08:17:38.752768    1283 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5e63e534-4202-4484-8ee3-9980f7f5a888-script\") on node \"addons-178002\" DevicePath \"\""
	Oct 26 08:17:38 addons-178002 kubelet[1283]: I1026 08:17:38.752781    1283 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5e63e534-4202-4484-8ee3-9980f7f5a888-data\") on node \"addons-178002\" DevicePath \"\""
	Oct 26 08:17:39 addons-178002 kubelet[1283]: I1026 08:17:39.159829    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e63e534-4202-4484-8ee3-9980f7f5a888" path="/var/lib/kubelet/pods/5e63e534-4202-4484-8ee3-9980f7f5a888/volumes"
	Oct 26 08:17:39 addons-178002 kubelet[1283]: I1026 08:17:39.482288    1283 scope.go:117] "RemoveContainer" containerID="a803ad42a723d2af3cc43dd4a955c7c619f06142c8eb09bd22cfde4ea2006a71"
	Oct 26 08:18:13 addons-178002 kubelet[1283]: I1026 08:18:13.191829    1283 scope.go:117] "RemoveContainer" containerID="e11cf40ec77c69d9b230e28b92d27489347fbf9ad96292e41a82a7fd50ab9fe4"
	Oct 26 08:18:13 addons-178002 kubelet[1283]: I1026 08:18:13.203284    1283 scope.go:117] "RemoveContainer" containerID="8e5682cafba3cc7d2484bc6b0e751d925e370346e65a68b16addd3a95b4bbeab"
	Oct 26 08:18:17 addons-178002 kubelet[1283]: I1026 08:18:17.157989    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-t9spk" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:18:20 addons-178002 kubelet[1283]: I1026 08:18:20.156920    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-n9gsn" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:18:50 addons-178002 kubelet[1283]: I1026 08:18:50.157190    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b6795" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:19:09 addons-178002 kubelet[1283]: I1026 08:19:09.961674    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c4cvz" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:19:11 addons-178002 kubelet[1283]: I1026 08:19:11.803950    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c4cvz" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:19:11 addons-178002 kubelet[1283]: I1026 08:19:11.804010    1283 scope.go:117] "RemoveContainer" containerID="96ee442f595ac0c462f439377ab38efef3d375e067f93bbbcb349e823ab9e902"
	Oct 26 08:19:12 addons-178002 kubelet[1283]: I1026 08:19:12.809220    1283 scope.go:117] "RemoveContainer" containerID="96ee442f595ac0c462f439377ab38efef3d375e067f93bbbcb349e823ab9e902"
	Oct 26 08:19:12 addons-178002 kubelet[1283]: I1026 08:19:12.809553    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c4cvz" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:19:12 addons-178002 kubelet[1283]: I1026 08:19:12.809592    1283 scope.go:117] "RemoveContainer" containerID="6d38c87f42a71a4aa50f687d5613199d4108a4ec6dfc2769f4ffc62e07417459"
	Oct 26 08:19:12 addons-178002 kubelet[1283]: E1026 08:19:12.809767    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-c4cvz_kube-system(d49d499e-2d32-44e8-8b7d-61e797375c41)\"" pod="kube-system/registry-creds-764b6fb674-c4cvz" podUID="d49d499e-2d32-44e8-8b7d-61e797375c41"
	Oct 26 08:19:13 addons-178002 kubelet[1283]: I1026 08:19:13.814202    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c4cvz" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:19:13 addons-178002 kubelet[1283]: I1026 08:19:13.814261    1283 scope.go:117] "RemoveContainer" containerID="6d38c87f42a71a4aa50f687d5613199d4108a4ec6dfc2769f4ffc62e07417459"
	Oct 26 08:19:13 addons-178002 kubelet[1283]: E1026 08:19:13.814411    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-c4cvz_kube-system(d49d499e-2d32-44e8-8b7d-61e797375c41)\"" pod="kube-system/registry-creds-764b6fb674-c4cvz" podUID="d49d499e-2d32-44e8-8b7d-61e797375c41"
	Oct 26 08:19:14 addons-178002 kubelet[1283]: I1026 08:19:14.548698    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmtm\" (UniqueName: \"kubernetes.io/projected/0d400630-acb4-4d6d-926f-5c39648fc954-kube-api-access-jwmtm\") pod \"hello-world-app-5d498dc89-ds5lc\" (UID: \"0d400630-acb4-4d6d-926f-5c39648fc954\") " pod="default/hello-world-app-5d498dc89-ds5lc"
	Oct 26 08:19:14 addons-178002 kubelet[1283]: I1026 08:19:14.548966    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0d400630-acb4-4d6d-926f-5c39648fc954-gcp-creds\") pod \"hello-world-app-5d498dc89-ds5lc\" (UID: \"0d400630-acb4-4d6d-926f-5c39648fc954\") " pod="default/hello-world-app-5d498dc89-ds5lc"
	Oct 26 08:19:14 addons-178002 kubelet[1283]: W1026 08:19:14.852369    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/crio-cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811 WatchSource:0}: Error finding container cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811: Status 404 returned error can't find the container with id cf2e248d7ee32cab74350614ecc88de06921f6b93b4625840364eb9186607811
	
	
	==> storage-provisioner [c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7] <==
	W1026 08:18:52.535666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:18:54.538663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:18:54.543255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:18:56.546184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:18:56.550327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:18:58.553384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:18:58.559469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:00.562836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:00.567687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:02.571671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:02.578645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:04.581302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:04.586171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:06.589543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:06.594359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:08.600603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:08.605567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:10.609680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:10.616037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:12.619812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:12.625267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:14.634487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:14.640786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:16.643522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:19:16.648588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
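The storage-provisioner output above is dominated by one repeating client-go warning: the pod still reads and writes a core/v1 Endpoints object (most likely as its leader-election lock; an assumption about this particular image, not confirmed by the log), and Kubernetes deprecates that API from v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch of the suggested replacement lookups, assuming the same cluster context:

	# EndpointSlice equivalent of the deprecated v1 Endpoints read (illustrative only)
	kubectl --context addons-178002 -n kube-system get endpointslices
	# the modern leader-election primitive, if the provisioner were updated to use it
	kubectl --context addons-178002 -n kube-system get leases
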
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-178002 -n addons-178002
helpers_test.go:269: (dbg) Run:  kubectl --context addons-178002 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-178002 describe pod ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-178002 describe pod ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx: exit status 1 (81.095022ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-thdtm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9d9jx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-178002 describe pod ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx: exit status 1
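The NotFound above has two plausible, hedged explanations: the admission create/patch pods belong to short-lived Jobs that may already have been garbage-collected by the time of the post-mortem, and the describe call omits a namespace flag, so kubectl searches "default" while the ingress-nginx addon creates those pods in the "ingress-nginx" namespace (an assumption based on the upstream ingress-nginx manifests). A namespaced retry would distinguish the two cases:

	# sketch: check whether the admission pods still exist in their own namespace
	kubectl --context addons-178002 -n ingress-nginx get pods
	kubectl --context addons-178002 -n ingress-nginx describe pod ingress-nginx-admission-create-thdtm
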
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (287.515112ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:19:17.780431  305842 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:19:17.781314  305842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:19:17.781330  305842 out.go:374] Setting ErrFile to fd 2...
	I1026 08:19:17.781337  305842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:19:17.781633  305842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:19:17.781991  305842 mustload.go:65] Loading cluster: addons-178002
	I1026 08:19:17.782396  305842 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:19:17.782415  305842 addons.go:606] checking whether the cluster is paused
	I1026 08:19:17.782559  305842 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:19:17.782579  305842 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:19:17.783089  305842 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:19:17.803933  305842 ssh_runner.go:195] Run: systemctl --version
	I1026 08:19:17.803989  305842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:19:17.828348  305842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:19:17.945279  305842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:19:17.945371  305842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:19:17.977714  305842 cri.go:89] found id: "6d38c87f42a71a4aa50f687d5613199d4108a4ec6dfc2769f4ffc62e07417459"
	I1026 08:19:17.977784  305842 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:19:17.977805  305842 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:19:17.977825  305842 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:19:17.977852  305842 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:19:17.977879  305842 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:19:17.977897  305842 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:19:17.977917  305842 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:19:17.977935  305842 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:19:17.977968  305842 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:19:17.977987  305842 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:19:17.978007  305842 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:19:17.978026  305842 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:19:17.978062  305842 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:19:17.978083  305842 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:19:17.978117  305842 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:19:17.978167  305842 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:19:17.978190  305842 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:19:17.978211  305842 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:19:17.978228  305842 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:19:17.978266  305842 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:19:17.978286  305842 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:19:17.978310  305842 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:19:17.978337  305842 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:19:17.978360  305842 cri.go:89] found id: ""
	I1026 08:19:17.978437  305842 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:19:17.996124  305842 out.go:203] 
	W1026 08:19:17.999136  305842 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:19:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:19:17.999240  305842 out.go:285] * 
	W1026 08:19:18.006590  305842 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:19:18.014023  305842 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
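Note that the disable command never reaches the addon itself: minikube's pre-flight "check paused" step shells into the node and runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node (plausibly because crio is configured with a different default OCI runtime or state directory, e.g. crun; an assumption, not confirmed by the log). A minimal reproduction sketch against the same profile, with the caveat that flags and paths are illustrative:

	# the failing pre-flight check, run directly
	minikube -p addons-178002 ssh -- sudo runc list -f json
	# => open /run/runc: no such file or directory
	# the CRI view of the same containers still works
	minikube -p addons-178002 ssh -- sudo crictl ps -a -q --label io.kubernetes.pod.namespace=kube-system
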
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable ingress --alsologtostderr -v=1: exit status 11 (279.128714ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:19:18.073969  305954 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:19:18.080181  305954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:19:18.080244  305954 out.go:374] Setting ErrFile to fd 2...
	I1026 08:19:18.080269  305954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:19:18.080595  305954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:19:18.080965  305954 mustload.go:65] Loading cluster: addons-178002
	I1026 08:19:18.081377  305954 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:19:18.081411  305954 addons.go:606] checking whether the cluster is paused
	I1026 08:19:18.081541  305954 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:19:18.081566  305954 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:19:18.082058  305954 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:19:18.107071  305954 ssh_runner.go:195] Run: systemctl --version
	I1026 08:19:18.107140  305954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:19:18.127111  305954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:19:18.233828  305954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:19:18.233942  305954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:19:18.267480  305954 cri.go:89] found id: "6d38c87f42a71a4aa50f687d5613199d4108a4ec6dfc2769f4ffc62e07417459"
	I1026 08:19:18.267503  305954 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:19:18.267508  305954 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:19:18.267517  305954 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:19:18.267524  305954 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:19:18.267528  305954 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:19:18.267531  305954 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:19:18.267535  305954 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:19:18.267537  305954 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:19:18.267543  305954 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:19:18.267547  305954 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:19:18.267550  305954 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:19:18.267552  305954 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:19:18.267556  305954 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:19:18.267559  305954 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:19:18.267563  305954 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:19:18.267571  305954 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:19:18.267575  305954 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:19:18.267578  305954 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:19:18.267581  305954 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:19:18.267586  305954 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:19:18.267592  305954 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:19:18.267595  305954 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:19:18.267599  305954 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:19:18.267602  305954 cri.go:89] found id: ""
	I1026 08:19:18.267653  305954 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:19:18.282898  305954 out.go:203] 
	W1026 08:19:18.285795  305954 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:19:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:19:18.285824  305954 out.go:285] * 
	W1026 08:19:18.292309  305954 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:19:18.295295  305954 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.92s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fpvwp" [b0fc56de-f408-4546-abe7-f59a43de7c6c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003264143s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (270.219946ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:16:54.153984  303517 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:16:54.154803  303517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:54.154818  303517 out.go:374] Setting ErrFile to fd 2...
	I1026 08:16:54.154824  303517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:54.155190  303517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:16:54.155516  303517 mustload.go:65] Loading cluster: addons-178002
	I1026 08:16:54.155950  303517 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:54.155974  303517 addons.go:606] checking whether the cluster is paused
	I1026 08:16:54.156096  303517 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:54.156121  303517 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:16:54.156592  303517 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:16:54.175385  303517 ssh_runner.go:195] Run: systemctl --version
	I1026 08:16:54.175448  303517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:16:54.200108  303517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:16:54.309603  303517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:16:54.309720  303517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:16:54.343850  303517 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:16:54.343874  303517 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:16:54.343879  303517 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:16:54.343883  303517 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:16:54.343887  303517 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:16:54.343891  303517 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:16:54.343894  303517 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:16:54.343897  303517 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:16:54.343901  303517 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:16:54.343907  303517 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:16:54.343911  303517 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:16:54.343913  303517 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:16:54.343917  303517 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:16:54.343920  303517 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:16:54.343924  303517 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:16:54.343934  303517 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:16:54.343941  303517 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:16:54.343945  303517 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:16:54.343949  303517 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:16:54.343952  303517 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:16:54.343957  303517 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:16:54.343960  303517 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:16:54.343963  303517 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:16:54.343966  303517 cri.go:89] found id: ""
	I1026 08:16:54.344016  303517 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:16:54.359868  303517 out.go:203] 
	W1026 08:16:54.362813  303517 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:16:54.362841  303517 out.go:285] * 
	* 
	W1026 08:16:54.369226  303517 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:16:54.372076  303517 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)
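
As in the other addon tests, the workload itself was healthy (the gadget pod was Ready within about 6s); the failure is confined to the trailing disable step and its paused pre-check. A quick manual equivalent of the wait performed above, using the context name from the log:

# List the gadget pods the test waited on:
kubectl --context addons-178002 get pods -n gadget -l k8s-app=gadget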
TestAddons/parallel/MetricsServer (5.35s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.659346ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003420199s
addons_test.go:463: (dbg) Run:  kubectl --context addons-178002 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (267.549094ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 08:16:47.884922  303414 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:16:47.885709  303414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:47.885725  303414 out.go:374] Setting ErrFile to fd 2...
	I1026 08:16:47.885730  303414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:47.886066  303414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:16:47.886402  303414 mustload.go:65] Loading cluster: addons-178002
	I1026 08:16:47.886860  303414 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:47.886882  303414 addons.go:606] checking whether the cluster is paused
	I1026 08:16:47.887022  303414 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:47.887042  303414 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:16:47.887532  303414 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:16:47.904139  303414 ssh_runner.go:195] Run: systemctl --version
	I1026 08:16:47.904205  303414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:16:47.925160  303414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:16:48.029709  303414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:16:48.029808  303414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:16:48.062919  303414 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:16:48.062945  303414 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:16:48.062951  303414 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:16:48.062955  303414 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:16:48.062958  303414 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:16:48.062962  303414 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:16:48.062966  303414 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:16:48.062969  303414 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:16:48.062973  303414 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:16:48.062981  303414 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:16:48.062984  303414 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:16:48.062988  303414 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:16:48.062991  303414 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:16:48.062995  303414 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:16:48.062999  303414 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:16:48.063011  303414 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:16:48.063020  303414 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:16:48.063026  303414 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:16:48.063029  303414 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:16:48.063033  303414 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:16:48.063038  303414 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:16:48.063042  303414 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:16:48.063045  303414 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:16:48.063048  303414 cri.go:89] found id: ""
	I1026 08:16:48.063100  303414 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:16:48.084421  303414 out.go:203] 
	W1026 08:16:48.087435  303414 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:16:48.087464  303414 out.go:285] * 
	* 
	W1026 08:16:48.093881  303414 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:16:48.096809  303414 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.35s)
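
Here too the metrics pipeline was working: the pod stabilized and `kubectl top pods -n kube-system` succeeded; only the disable step failed. For manual verification, the data behind `kubectl top` can be pulled straight from the Metrics API, and the harness's 6m label wait has a one-line kubectl equivalent (convenience sketches, not the harness's code):

# Raw Metrics API request behind `kubectl top pods -n kube-system`:
kubectl --context addons-178002 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"

# Rough equivalent of the test's readiness wait:
kubectl --context addons-178002 wait pod -l k8s-app=metrics-server -n kube-system --for=condition=Ready --timeout=6m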
TestAddons/parallel/CSI (48.41s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1026 08:16:26.980541  295475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 08:16:26.984293  295475 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 08:16:26.984324  295475 kapi.go:107] duration metric: took 3.811241ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.822474ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-178002 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-178002 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [33ffa5da-3ff4-47aa-8b19-2f206ca66416] Pending
helpers_test.go:352: "task-pv-pod" [33ffa5da-3ff4-47aa-8b19-2f206ca66416] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [33ffa5da-3ff4-47aa-8b19-2f206ca66416] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00385252s
addons_test.go:572: (dbg) Run:  kubectl --context addons-178002 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-178002 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-178002 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-178002 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-178002 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-178002 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-178002 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [935f4da3-7359-4e99-9670-966331ca8bf5] Pending
helpers_test.go:352: "task-pv-pod-restore" [935f4da3-7359-4e99-9670-966331ca8bf5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [935f4da3-7359-4e99-9670-966331ca8bf5] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003960494s
addons_test.go:614: (dbg) Run:  kubectl --context addons-178002 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-178002 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-178002 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (321.103228ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 08:17:14.821331  304185 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:14.822134  304185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:14.822183  304185 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:14.822208  304185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:14.822553  304185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:14.822953  304185 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:14.823399  304185 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:14.823447  304185 addons.go:606] checking whether the cluster is paused
	I1026 08:17:14.823582  304185 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:14.823619  304185 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:14.824108  304185 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:14.845004  304185 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:14.845113  304185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:14.863425  304185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:14.970894  304185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:14.970984  304185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:15.035169  304185 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:15.035218  304185 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:15.035227  304185 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:15.035232  304185 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:15.035236  304185 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:15.035242  304185 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:15.035246  304185 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:15.035249  304185 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:15.035254  304185 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:15.035263  304185 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:15.035267  304185 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:15.035271  304185 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:15.035275  304185 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:15.035279  304185 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:15.035282  304185 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:15.035312  304185 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:15.035317  304185 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:15.035323  304185 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:15.035333  304185 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:15.035338  304185 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:15.035343  304185 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:15.035346  304185 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:15.035350  304185 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:15.035353  304185 cri.go:89] found id: ""
	I1026 08:17:15.035422  304185 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:15.063255  304185 out.go:203] 
	W1026 08:17:15.068506  304185 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:15.068550  304185 out.go:285] * 
	* 
	W1026 08:17:15.080899  304185 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:15.085854  304185 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (287.723197ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 08:17:15.154828  304229 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:15.155831  304229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:15.155845  304229 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:15.155851  304229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:15.156307  304229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:15.156732  304229 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:15.157190  304229 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:15.157203  304229 addons.go:606] checking whether the cluster is paused
	I1026 08:17:15.158868  304229 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:15.158949  304229 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:15.164752  304229 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:15.184627  304229 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:15.184695  304229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:15.205870  304229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:15.317423  304229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:15.317515  304229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:15.347949  304229 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:15.347978  304229 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:15.347984  304229 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:15.347988  304229 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:15.347992  304229 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:15.347996  304229 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:15.348000  304229 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:15.348003  304229 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:15.348006  304229 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:15.348014  304229 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:15.348018  304229 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:15.348022  304229 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:15.348025  304229 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:15.348029  304229 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:15.348033  304229 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:15.348043  304229 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:15.348050  304229 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:15.348056  304229 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:15.348059  304229 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:15.348063  304229 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:15.348068  304229 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:15.348071  304229 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:15.348075  304229 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:15.348078  304229 cri.go:89] found id: ""
	I1026 08:17:15.348136  304229 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:15.363886  304229 out.go:203] 
	W1026 08:17:15.367024  304229 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:15.367053  304229 out.go:285] * 
	* 
	W1026 08:17:15.373390  304229 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:15.376498  304229 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.41s)
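
The steps above complete the full snapshot round-trip (claim, pod, VolumeSnapshot, restore into hpvc-restore, second pod) before the run fails on the same disable pre-check. The testdata manifests are not reproduced in this report, so the sketch below reconstructs the snapshot/restore pair from the object names in the log; the two class names are assumptions about the csi-hostpath-driver addon's defaults and are marked as such:

# Snapshot the source claim, then restore it into a new claim.
# Names (hpvc, new-snapshot-demo, hpvc-restore) come from the log;
# the class names are assumed defaults, check `kubectl get sc,volumesnapshotclass`.
kubectl --context addons-178002 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed default
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed default
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF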
TestAddons/parallel/Headlamp (3.17s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-178002 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-178002 --alsologtostderr -v=1: exit status 11 (285.745835ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 08:16:23.864177  302420 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:16:23.865406  302420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:23.865432  302420 out.go:374] Setting ErrFile to fd 2...
	I1026 08:16:23.865438  302420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:23.865749  302420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:16:23.866138  302420 mustload.go:65] Loading cluster: addons-178002
	I1026 08:16:23.866571  302420 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:23.866593  302420 addons.go:606] checking whether the cluster is paused
	I1026 08:16:23.866773  302420 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:23.866790  302420 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:16:23.867403  302420 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:16:23.887799  302420 ssh_runner.go:195] Run: systemctl --version
	I1026 08:16:23.887867  302420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:16:23.904541  302420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:16:24.010282  302420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:16:24.010389  302420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:16:24.050136  302420 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:16:24.050215  302420 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:16:24.050245  302420 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:16:24.050270  302420 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:16:24.050287  302420 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:16:24.050322  302420 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:16:24.050343  302420 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:16:24.050363  302420 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:16:24.050386  302420 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:16:24.050423  302420 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:16:24.050442  302420 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:16:24.050463  302420 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:16:24.050489  302420 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:16:24.050540  302420 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:16:24.050568  302420 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:16:24.050589  302420 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:16:24.050618  302420 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:16:24.050661  302420 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:16:24.050678  302420 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:16:24.050696  302420 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:16:24.050762  302420 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:16:24.050782  302420 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:16:24.050802  302420 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:16:24.050821  302420 cri.go:89] found id: ""
	I1026 08:16:24.050926  302420 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:16:24.066956  302420 out.go:203] 
	W1026 08:16:24.070048  302420 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:16:24.070075  302420 out.go:285] * 
	* 
	W1026 08:16:24.076464  302420 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:16:24.079402  302420 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-178002 --alsologtostderr -v=1": exit status 11
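
The enable failed inside the same paused pre-check, before any headlamp resources were created, so the post-mortem below inspects only the base node container. One useful cross-check while reading it: the 22/tcp mapping to 127.0.0.1:33140 under NetworkSettings matches the ssh client line in the stderr above, and can be read back with the same inspect template the tool uses:

# Read the host port mapped to the node's SSH (template copied from the log):
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-178002
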
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-178002
helpers_test.go:243: (dbg) docker inspect addons-178002:
-- stdout --
	[
	    {
	        "Id": "b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d",
	        "Created": "2025-10-26T08:13:45.784640711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:13:45.860078529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/hosts",
	        "LogPath": "/var/lib/docker/containers/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d/b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d-json.log",
	        "Name": "/addons-178002",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-178002:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-178002",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b10aa919ba5d9b59e7bbf28ead60809cfe180b37e78710e58fbec95724c5876d",
	                "LowerDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb6bcc7d597ad4d177213f8498c8e2f19ea7ca5ecbf6af79a303ef76bef57180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-178002",
	                "Source": "/var/lib/docker/volumes/addons-178002/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-178002",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-178002",
	                "name.minikube.sigs.k8s.io": "addons-178002",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7556e840e06a3f4a8a4fa7532564ee2fc4edac05a65cf8074f12fffa2b7b8e77",
	            "SandboxKey": "/var/run/docker/netns/7556e840e06a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-178002": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:f0:75:08:9c:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f5b2696acfb438fa073a6590a62d488c86d7998d0b7b91c4da9e01aeed87153",
	                    "EndpointID": "93c81d2c3104b04872040ce9c170acb47786dd8730969cfa86f13f8ccfa90b72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-178002",
	                        "b10aa919ba5d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
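The JSON above is the raw "docker container inspect" dump the test helper captured for the addons-178002 node. When only one field is of interest, the same Go template syntax that appears later in these logs can pull it out directly; for example, to read the host port mapped to the container's SSH port (assuming the container still exists on the host):

	docker container inspect addons-178002 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'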
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-178002 -n addons-178002
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-178002 logs -n 25: (1.460949676s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-923578 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-923578   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ delete  │ -p download-only-923578                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-923578   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ start   │ -o=json --download-only -p download-only-431602 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-431602   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ delete  │ -p download-only-431602                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-431602   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ delete  │ -p download-only-923578                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-923578   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ delete  │ -p download-only-431602                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-431602   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ start   │ --download-only -p download-docker-436037 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-436037 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ delete  │ -p download-docker-436037                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-436037 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ start   │ --download-only -p binary-mirror-991071 --alsologtostderr --binary-mirror http://127.0.0.1:45987 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-991071   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ delete  │ -p binary-mirror-991071                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-991071   │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ addons  │ disable dashboard -p addons-178002                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-178002                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ start   │ -p addons-178002 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:16 UTC │
	│ addons  │ addons-178002 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ addons-178002 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-178002 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-178002          │ jenkins │ v1.37.0 │ 26 Oct 25 08:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
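Every row in the Audit table above is a complete minikube invocation, so the ARGS column doubles as a reproduction recipe. For example, the last row (the command whose failure this post-mortem covers) replays as:

	out/minikube-linux-arm64 addons enable headlamp -p addons-178002 --alsologtostderr -v=1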
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:13:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:13:20.922404  296225 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:13:20.922581  296225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:20.922604  296225 out.go:374] Setting ErrFile to fd 2...
	I1026 08:13:20.922623  296225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:20.922960  296225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:13:20.923436  296225 out.go:368] Setting JSON to false
	I1026 08:13:20.924290  296225 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6951,"bootTime":1761459450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:13:20.924390  296225 start.go:141] virtualization:  
	I1026 08:13:20.935389  296225 out.go:179] * [addons-178002] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:13:20.946013  296225 notify.go:220] Checking for updates...
	I1026 08:13:20.965934  296225 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:13:20.996439  296225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:13:21.031225  296225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:13:21.062180  296225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:13:21.085424  296225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:13:21.119476  296225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:13:21.151206  296225 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:13:21.172127  296225 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:13:21.172271  296225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:21.234026  296225 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 08:13:21.224352643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:21.234128  296225 docker.go:318] overlay module found
	I1026 08:13:21.270010  296225 out.go:179] * Using the docker driver based on user configuration
	I1026 08:13:21.302208  296225 start.go:305] selected driver: docker
	I1026 08:13:21.302238  296225 start.go:925] validating driver "docker" against <nil>
	I1026 08:13:21.302254  296225 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:13:21.303022  296225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:21.373137  296225 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 08:13:21.363263084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:21.373298  296225 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:13:21.373522  296225 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:13:21.392997  296225 out.go:179] * Using Docker driver with root privileges
	I1026 08:13:21.427317  296225 cni.go:84] Creating CNI manager for ""
	I1026 08:13:21.427413  296225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:13:21.427426  296225 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:13:21.427515  296225 start.go:349] cluster config:
	{Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:13:21.458155  296225 out.go:179] * Starting "addons-178002" primary control-plane node in "addons-178002" cluster
	I1026 08:13:21.489531  296225 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:13:21.523343  296225 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:13:21.563070  296225 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:13:21.563071  296225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:13:21.563154  296225 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 08:13:21.563165  296225 cache.go:58] Caching tarball of preloaded images
	I1026 08:13:21.563243  296225 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:13:21.563253  296225 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:13:21.563637  296225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json ...
	I1026 08:13:21.563672  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json: {Name:mk9b0a2e0e4ccf16030eb426a52449eb315471fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:21.580461  296225 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 08:13:21.580630  296225 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 08:13:21.580656  296225 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 08:13:21.580665  296225 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 08:13:21.580673  296225 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 08:13:21.580679  296225 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 08:13:39.653167  296225 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 08:13:39.653200  296225 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:13:39.653231  296225 start.go:360] acquireMachinesLock for addons-178002: {Name:mke1fda8b123db5306a3ea50855b62b314240b5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:13:39.654130  296225 start.go:364] duration metric: took 875.289µs to acquireMachinesLock for "addons-178002"
	I1026 08:13:39.654177  296225 start.go:93] Provisioning new machine with config: &{Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:13:39.654267  296225 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:13:39.657584  296225 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 08:13:39.657826  296225 start.go:159] libmachine.API.Create for "addons-178002" (driver="docker")
	I1026 08:13:39.657877  296225 client.go:168] LocalClient.Create starting
	I1026 08:13:39.657994  296225 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 08:13:39.869081  296225 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 08:13:40.158499  296225 cli_runner.go:164] Run: docker network inspect addons-178002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:13:40.174065  296225 cli_runner.go:211] docker network inspect addons-178002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:13:40.174164  296225 network_create.go:284] running [docker network inspect addons-178002] to gather additional debugging logs...
	I1026 08:13:40.174185  296225 cli_runner.go:164] Run: docker network inspect addons-178002
	W1026 08:13:40.191633  296225 cli_runner.go:211] docker network inspect addons-178002 returned with exit code 1
	I1026 08:13:40.191664  296225 network_create.go:287] error running [docker network inspect addons-178002]: docker network inspect addons-178002: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-178002 not found
	I1026 08:13:40.191678  296225 network_create.go:289] output of [docker network inspect addons-178002]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-178002 not found
	
	** /stderr **
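The "network addons-178002 not found" error captured above is minikube's expected pre-creation probe, not a fault: it inspects the network first and only creates it when the inspect fails. The creation command it issues next (logged verbatim below) can also be run by hand if the network ever needs rebuilding outside minikube:

	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-178002 addons-178002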
	I1026 08:13:40.191771  296225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:13:40.208792  296225 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7c80}
	I1026 08:13:40.208839  296225 network_create.go:124] attempt to create docker network addons-178002 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 08:13:40.208896  296225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-178002 addons-178002
	I1026 08:13:40.266159  296225 network_create.go:108] docker network addons-178002 192.168.49.0/24 created
	I1026 08:13:40.266188  296225 kic.go:121] calculated static IP "192.168.49.2" for the "addons-178002" container
	I1026 08:13:40.266261  296225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:13:40.280089  296225 cli_runner.go:164] Run: docker volume create addons-178002 --label name.minikube.sigs.k8s.io=addons-178002 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:13:40.302549  296225 oci.go:103] Successfully created a docker volume addons-178002
	I1026 08:13:40.302676  296225 cli_runner.go:164] Run: docker run --rm --name addons-178002-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-178002 --entrypoint /usr/bin/test -v addons-178002:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:13:41.263386  296225 oci.go:107] Successfully prepared a docker volume addons-178002
	I1026 08:13:41.263429  296225 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:13:41.263449  296225 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:13:41.263513  296225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-178002:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:13:45.716244  296225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-178002:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.45267068s)
	I1026 08:13:45.716278  296225 kic.go:203] duration metric: took 4.452825291s to extract preloaded images to volume ...
	W1026 08:13:45.716441  296225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 08:13:45.716582  296225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:13:45.770130  296225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-178002 --name addons-178002 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-178002 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-178002 --network addons-178002 --ip 192.168.49.2 --volume addons-178002:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:13:46.073788  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Running}}
	I1026 08:13:46.092017  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:13:46.117792  296225 cli_runner.go:164] Run: docker exec addons-178002 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:13:46.169051  296225 oci.go:144] the created container "addons-178002" has a running status.
	I1026 08:13:46.169094  296225 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa...
	I1026 08:13:46.595046  296225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:13:46.614478  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:13:46.630245  296225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:13:46.630264  296225 kic_runner.go:114] Args: [docker exec --privileged addons-178002 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:13:46.669205  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:13:46.686343  296225 machine.go:93] provisionDockerMachine start ...
	I1026 08:13:46.686435  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:46.703700  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:46.704031  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:46.704047  296225 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:13:46.704618  296225 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50380->127.0.0.1:33140: read: connection reset by peer
	I1026 08:13:49.854297  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-178002
	
	I1026 08:13:49.854321  296225 ubuntu.go:182] provisioning hostname "addons-178002"
	I1026 08:13:49.854386  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:49.872042  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:49.872374  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:49.872392  296225 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-178002 && echo "addons-178002" | sudo tee /etc/hostname
	I1026 08:13:50.033346  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-178002
	
	I1026 08:13:50.033430  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.052103  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:50.052408  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:50.052432  296225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-178002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-178002/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-178002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:13:50.199402  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:13:50.199490  296225 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:13:50.199555  296225 ubuntu.go:190] setting up certificates
	I1026 08:13:50.199589  296225 provision.go:84] configureAuth start
	I1026 08:13:50.199686  296225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-178002
	I1026 08:13:50.215834  296225 provision.go:143] copyHostCerts
	I1026 08:13:50.215921  296225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:13:50.216083  296225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:13:50.216164  296225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:13:50.216226  296225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.addons-178002 san=[127.0.0.1 192.168.49.2 addons-178002 localhost minikube]
	I1026 08:13:50.435359  296225 provision.go:177] copyRemoteCerts
	I1026 08:13:50.435426  296225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:13:50.435468  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.453201  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:50.554780  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:13:50.572407  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:13:50.590296  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:13:50.608232  296225 provision.go:87] duration metric: took 408.614622ms to configureAuth
	I1026 08:13:50.608306  296225 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:13:50.608527  296225 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:13:50.608644  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.626407  296225 main.go:141] libmachine: Using SSH client type: native
	I1026 08:13:50.626770  296225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1026 08:13:50.626791  296225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:13:50.883942  296225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:13:50.883966  296225 machine.go:96] duration metric: took 4.197598925s to provisionDockerMachine
	I1026 08:13:50.883986  296225 client.go:171] duration metric: took 11.226098649s to LocalClient.Create
	I1026 08:13:50.884000  296225 start.go:167] duration metric: took 11.226176319s to libmachine.API.Create "addons-178002"
	I1026 08:13:50.884008  296225 start.go:293] postStartSetup for "addons-178002" (driver="docker")
	I1026 08:13:50.884018  296225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:13:50.884094  296225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:13:50.884144  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:50.902250  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.006532  296225 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:13:51.015272  296225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:13:51.015306  296225 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:13:51.015319  296225 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:13:51.015426  296225 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:13:51.015459  296225 start.go:296] duration metric: took 131.4447ms for postStartSetup
	I1026 08:13:51.015809  296225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-178002
	I1026 08:13:51.033271  296225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/config.json ...
	I1026 08:13:51.033561  296225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:13:51.033611  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:51.050702  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.156336  296225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:13:51.161537  296225 start.go:128] duration metric: took 11.507252441s to createHost
	I1026 08:13:51.161577  296225 start.go:83] releasing machines lock for "addons-178002", held for 11.507411917s
	I1026 08:13:51.161698  296225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-178002
	I1026 08:13:51.179682  296225 ssh_runner.go:195] Run: cat /version.json
	I1026 08:13:51.179739  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:51.179771  296225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:13:51.179838  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:13:51.198635  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.200221  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:13:51.298421  296225 ssh_runner.go:195] Run: systemctl --version
	I1026 08:13:51.393097  296225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:13:51.433424  296225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:13:51.437544  296225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:13:51.437660  296225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:13:51.465536  296225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 08:13:51.465568  296225 start.go:495] detecting cgroup driver to use...
	I1026 08:13:51.465602  296225 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:13:51.465651  296225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:13:51.482580  296225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:13:51.494669  296225 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:13:51.494903  296225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:13:51.512764  296225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:13:51.530963  296225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:13:51.648831  296225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:13:51.779518  296225 docker.go:234] disabling docker service ...
	I1026 08:13:51.779624  296225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:13:51.801654  296225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:13:51.814306  296225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:13:51.931086  296225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:13:52.049566  296225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:13:52.063057  296225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:13:52.081428  296225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:13:52.081501  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.091183  296225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:13:52.091258  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.100194  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.109190  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.118089  296225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:13:52.126685  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.135958  296225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.149275  296225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:13:52.157928  296225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:13:52.165130  296225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:13:52.172623  296225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:13:52.286773  296225 ssh_runner.go:195] Run: sudo systemctl restart crio
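The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A quick sanity check that the values landed, assuming the same drop-in path on the node, is:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf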
	I1026 08:13:52.411954  296225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:13:52.412064  296225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:13:52.416076  296225 start.go:563] Will wait 60s for crictl version
	I1026 08:13:52.416174  296225 ssh_runner.go:195] Run: which crictl
	I1026 08:13:52.419781  296225 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:13:52.447661  296225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
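The same probe can be reproduced against the socket directly; a minimal check (crictl's standard --runtime-endpoint flag, with the binary path found above):

	sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version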
	I1026 08:13:52.447795  296225 ssh_runner.go:195] Run: crio --version
	I1026 08:13:52.477049  296225 ssh_runner.go:195] Run: crio --version
	I1026 08:13:52.509012  296225 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:13:52.512070  296225 cli_runner.go:164] Run: docker network inspect addons-178002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:13:52.527592  296225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:13:52.531227  296225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
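The one-liner above strips any stale host.minikube.internal entry and appends the fresh mapping; whether it took can be checked from the node (a sketch, run through minikube ssh):

	out/minikube-linux-arm64 -p addons-178002 ssh -- getent hosts host.minikube.internal
	# expect: 192.168.49.1    host.minikube.internal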
	I1026 08:13:52.540607  296225 kubeadm.go:883] updating cluster {Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:13:52.540727  296225 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:13:52.540784  296225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:13:52.572277  296225 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:13:52.572302  296225 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:13:52.572364  296225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:13:52.601071  296225 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:13:52.601093  296225 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:13:52.601100  296225 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 08:13:52.601198  296225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-178002 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
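The kubelet override above is written shortly below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; once in place, the merged unit can be inspected on the node with standard systemd tooling (a sketch):

	systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager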
	I1026 08:13:52.601281  296225 ssh_runner.go:195] Run: crio config
	I1026 08:13:52.671032  296225 cni.go:84] Creating CNI manager for ""
	I1026 08:13:52.671052  296225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:13:52.671077  296225 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:13:52.671100  296225 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-178002 NodeName:addons-178002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:13:52.671234  296225 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-178002"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
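Before this file is handed to kubeadm init below, it can be sanity-checked offline; recent kubeadm ships a validator for exactly this purpose (a sketch, run on the node with the binary and path from the surrounding log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml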
	
	I1026 08:13:52.671317  296225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:13:52.678773  296225 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:13:52.678907  296225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:13:52.686288  296225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:13:52.698573  296225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:13:52.712113  296225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1026 08:13:52.724460  296225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:13:52.727945  296225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:13:52.737479  296225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:13:52.849416  296225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:13:52.864413  296225 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002 for IP: 192.168.49.2
	I1026 08:13:52.864474  296225 certs.go:195] generating shared ca certs ...
	I1026 08:13:52.864505  296225 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:52.864655  296225 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:13:53.283672  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt ...
	I1026 08:13:53.283705  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt: {Name:mk52185ba7eb3198f2aa31696853a84dc9f3f8f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.284486  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key ...
	I1026 08:13:53.284506  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key: {Name:mkafbd07ac86bd46c3008360f658487f62084ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.285172  296225 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:13:53.726000  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt ...
	I1026 08:13:53.726035  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt: {Name:mka3c5574cf58a4e94452c3f4733046bf7166c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.726807  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key ...
	I1026 08:13:53.726825  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key: {Name:mkaeb2e1f0d31de3904b996c06503d5146d83c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:53.726913  296225 certs.go:257] generating profile certs ...
	I1026 08:13:53.726973  296225 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.key
	I1026 08:13:53.726990  296225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt with IP's: []
	I1026 08:13:54.254950  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt ...
	I1026 08:13:54.254993  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: {Name:mk843ba9d2ee337ff36af75db50ad7a49e181329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.255180  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.key ...
	I1026 08:13:54.255192  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.key: {Name:mkcb6018a2404aa7732c5e6fa2e629573b1667c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.255863  296225 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a
	I1026 08:13:54.255886  296225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 08:13:54.769867  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a ...
	I1026 08:13:54.769899  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a: {Name:mkba926599df4eb92c1da5cce3de24ed428d8993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.770693  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a ...
	I1026 08:13:54.770744  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a: {Name:mk1eb95ff6463545d43368ede656f0681d894143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:54.771412  296225 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt.3655ef7a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt
	I1026 08:13:54.771501  296225 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key.3655ef7a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key
	I1026 08:13:54.771557  296225 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key
	I1026 08:13:54.771582  296225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt with IP's: []
	I1026 08:13:55.354395  296225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt ...
	I1026 08:13:55.354426  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt: {Name:mk121ba1de94c2992d6b7dab04979c44c5e525e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:55.354625  296225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key ...
	I1026 08:13:55.354640  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key: {Name:mk6079091ca4029534e435284ffdb6f35be44b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:55.354858  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:13:55.354900  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:13:55.354924  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:13:55.354957  296225 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:13:55.355574  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:13:55.375981  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:13:55.396532  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:13:55.414835  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:13:55.433075  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 08:13:55.451449  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:13:55.470342  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:13:55.487953  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:13:55.505821  296225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:13:55.524024  296225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:13:55.537773  296225 ssh_runner.go:195] Run: openssl version
	I1026 08:13:55.544305  296225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:13:55.553249  296225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:13:55.557119  296225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:13:55.557186  296225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:13:55.598589  296225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
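The b5213941.0 link name is not arbitrary: it is the subject hash of the CA certificate, which the openssl call above computes, suffixed with .0; the pairing can be reproduced by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0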
	I1026 08:13:55.607437  296225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:13:55.611302  296225 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:13:55.611395  296225 kubeadm.go:400] StartCluster: {Name:addons-178002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-178002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:13:55.611494  296225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:13:55.611555  296225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:13:55.642211  296225 cri.go:89] found id: ""
	I1026 08:13:55.642292  296225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:13:55.650505  296225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:13:55.658243  296225 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:13:55.658339  296225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:13:55.666112  296225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:13:55.666132  296225 kubeadm.go:157] found existing configuration files:
	
	I1026 08:13:55.666186  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:13:55.673915  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:13:55.673984  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:13:55.681456  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:13:55.689392  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:13:55.689505  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:13:55.697275  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:13:55.705233  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:13:55.705299  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:13:55.712953  296225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:13:55.720435  296225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:13:55.720500  296225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:13:55.727747  296225 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:13:55.768223  296225 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:13:55.768452  296225 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:13:55.791765  296225 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:13:55.791935  296225 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 08:13:55.792020  296225 kubeadm.go:318] OS: Linux
	I1026 08:13:55.792115  296225 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:13:55.792183  296225 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 08:13:55.792246  296225 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:13:55.792304  296225 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:13:55.792377  296225 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:13:55.792471  296225 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:13:55.792526  296225 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:13:55.792581  296225 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:13:55.792654  296225 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 08:13:55.862807  296225 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:13:55.862922  296225 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:13:55.863023  296225 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:13:55.872144  296225 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 08:13:55.878923  296225 out.go:252]   - Generating certificates and keys ...
	I1026 08:13:55.879043  296225 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:13:55.879119  296225 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:13:56.173764  296225 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:13:56.681063  296225 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:13:57.194081  296225 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:13:57.649246  296225 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:13:58.005153  296225 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:13:58.005486  296225 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-178002 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 08:13:58.068174  296225 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:13:58.068544  296225 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-178002 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 08:13:58.660812  296225 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:13:59.906676  296225 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 08:14:01.234743  296225 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:14:01.235045  296225 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:14:01.931079  296225 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:14:02.129063  296225 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:14:02.405716  296225 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:14:03.639607  296225 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:14:04.139099  296225 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:14:04.140112  296225 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:14:04.144271  296225 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 08:14:04.147600  296225 out.go:252]   - Booting up control plane ...
	I1026 08:14:04.147713  296225 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:14:04.147862  296225 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:14:04.147934  296225 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:14:04.162921  296225 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:14:04.163315  296225 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:14:04.171206  296225 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:14:04.171573  296225 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:14:04.171623  296225 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:14:04.304403  296225 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:14:04.304532  296225 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:14:05.807107  296225 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500948639s
	I1026 08:14:05.807989  296225 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:14:05.808112  296225 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 08:14:05.808292  296225 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:14:05.808386  296225 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:14:10.017994  296225 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.208896716s
	I1026 08:14:10.282674  296225 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.474568889s
	I1026 08:14:12.311215  296225 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502714734s
	I1026 08:14:12.332613  296225 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:14:12.347761  296225 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:14:12.362562  296225 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:14:12.362810  296225 kubeadm.go:318] [mark-control-plane] Marking the node addons-178002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:14:12.375067  296225 kubeadm.go:318] [bootstrap-token] Using token: ipyrbz.o4znooj9wtawkvdk
	I1026 08:14:12.378095  296225 out.go:252]   - Configuring RBAC rules ...
	I1026 08:14:12.378246  296225 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:14:12.382167  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:14:12.390817  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:14:12.395041  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:14:12.401459  296225 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:14:12.405552  296225 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:14:12.718008  296225 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:14:13.168625  296225 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:14:13.718882  296225 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:14:13.719997  296225 kubeadm.go:318] 
	I1026 08:14:13.720076  296225 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:14:13.720087  296225 kubeadm.go:318] 
	I1026 08:14:13.720165  296225 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:14:13.720173  296225 kubeadm.go:318] 
	I1026 08:14:13.720199  296225 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:14:13.720262  296225 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:14:13.720319  296225 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:14:13.720328  296225 kubeadm.go:318] 
	I1026 08:14:13.720382  296225 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:14:13.720390  296225 kubeadm.go:318] 
	I1026 08:14:13.720438  296225 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:14:13.720453  296225 kubeadm.go:318] 
	I1026 08:14:13.720508  296225 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:14:13.720587  296225 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:14:13.720659  296225 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:14:13.720667  296225 kubeadm.go:318] 
	I1026 08:14:13.720752  296225 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:14:13.720831  296225 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:14:13.720840  296225 kubeadm.go:318] 
	I1026 08:14:13.720924  296225 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ipyrbz.o4znooj9wtawkvdk \
	I1026 08:14:13.721031  296225 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 08:14:13.721055  296225 kubeadm.go:318] 	--control-plane 
	I1026 08:14:13.721064  296225 kubeadm.go:318] 
	I1026 08:14:13.721149  296225 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:14:13.721157  296225 kubeadm.go:318] 
	I1026 08:14:13.721238  296225 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ipyrbz.o4znooj9wtawkvdk \
	I1026 08:14:13.721343  296225 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 08:14:13.724013  296225 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 08:14:13.724251  296225 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 08:14:13.724363  296225 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
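The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA at any time; the standard recipe (a sketch, assuming minikube's certificate directory /var/lib/minikube/certs from the [certs] phase above) is:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'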
	I1026 08:14:13.724384  296225 cni.go:84] Creating CNI manager for ""
	I1026 08:14:13.724394  296225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:14:13.727579  296225 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:14:13.730568  296225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 08:14:13.734472  296225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:14:13.734533  296225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 08:14:13.747492  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
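With the manifest applied, a quick way to confirm the CNI actually rolled out (a sketch, assuming kindnet installs as a kube-system DaemonSet named kindnet, its usual shape) is:

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s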
	I1026 08:14:14.052061  296225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:14:14.052196  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:14.052277  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-178002 minikube.k8s.io/updated_at=2025_10_26T08_14_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=addons-178002 minikube.k8s.io/primary=true
	I1026 08:14:14.244747  296225 ops.go:34] apiserver oom_adj: -16
	I1026 08:14:14.244852  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:14.745202  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:15.245826  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:15.744985  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:16.244964  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:16.745002  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:17.244999  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:17.745774  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:18.244997  296225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:14:18.340066  296225 kubeadm.go:1113] duration metric: took 4.287920924s to wait for elevateKubeSystemPrivileges
	I1026 08:14:18.340105  296225 kubeadm.go:402] duration metric: took 22.728713047s to StartCluster
	I1026 08:14:18.340127  296225 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:14:18.340890  296225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:14:18.341270  296225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:14:18.341493  296225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:14:18.341644  296225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:14:18.341915  296225 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:14:18.341935  296225 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
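The same toggles are exposed through the CLI, which is how the tests flip them; for example, on this profile:

	out/minikube-linux-arm64 -p addons-178002 addons list
	out/minikube-linux-arm64 -p addons-178002 addons enable metrics-server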
	I1026 08:14:18.342036  296225 addons.go:69] Setting yakd=true in profile "addons-178002"
	I1026 08:14:18.342050  296225 addons.go:238] Setting addon yakd=true in "addons-178002"
	I1026 08:14:18.342067  296225 addons.go:69] Setting inspektor-gadget=true in profile "addons-178002"
	I1026 08:14:18.342077  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.342081  296225 addons.go:238] Setting addon inspektor-gadget=true in "addons-178002"
	I1026 08:14:18.342102  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.342599  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.342612  296225 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-178002"
	I1026 08:14:18.342625  296225 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-178002"
	I1026 08:14:18.342644  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.343046  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.343147  296225 addons.go:69] Setting metrics-server=true in profile "addons-178002"
	I1026 08:14:18.343164  296225 addons.go:238] Setting addon metrics-server=true in "addons-178002"
	I1026 08:14:18.343187  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.343606  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.346190  296225 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-178002"
	I1026 08:14:18.346231  296225 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-178002"
	I1026 08:14:18.346267  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.346783  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.346994  296225 addons.go:69] Setting cloud-spanner=true in profile "addons-178002"
	I1026 08:14:18.347053  296225 addons.go:238] Setting addon cloud-spanner=true in "addons-178002"
	I1026 08:14:18.347099  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.347539  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347731  296225 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-178002"
	I1026 08:14:18.362231  296225 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-178002"
	I1026 08:14:18.362266  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.362794  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347743  296225 addons.go:69] Setting default-storageclass=true in profile "addons-178002"
	I1026 08:14:18.379729  296225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-178002"
	I1026 08:14:18.380114  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347749  296225 addons.go:69] Setting gcp-auth=true in profile "addons-178002"
	I1026 08:14:18.382161  296225 mustload.go:65] Loading cluster: addons-178002
	I1026 08:14:18.382442  296225 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:14:18.402006  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347755  296225 addons.go:69] Setting ingress=true in profile "addons-178002"
	I1026 08:14:18.413909  296225 addons.go:238] Setting addon ingress=true in "addons-178002"
	I1026 08:14:18.413978  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.414505  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347760  296225 addons.go:69] Setting ingress-dns=true in profile "addons-178002"
	I1026 08:14:18.458849  296225 addons.go:238] Setting addon ingress-dns=true in "addons-178002"
	I1026 08:14:18.458932  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.459683  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.347806  296225 out.go:179] * Verifying Kubernetes components...
	I1026 08:14:18.348232  296225 addons.go:69] Setting volcano=true in profile "addons-178002"
	I1026 08:14:18.473215  296225 addons.go:238] Setting addon volcano=true in "addons-178002"
	I1026 08:14:18.473266  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.473729  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.473877  296225 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 08:14:18.348244  296225 addons.go:69] Setting registry=true in profile "addons-178002"
	I1026 08:14:18.484742  296225 addons.go:238] Setting addon registry=true in "addons-178002"
	I1026 08:14:18.484783  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.485251  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.492683  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 08:14:18.492760  296225 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 08:14:18.492865  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.348251  296225 addons.go:69] Setting registry-creds=true in profile "addons-178002"
	I1026 08:14:18.494260  296225 addons.go:238] Setting addon registry-creds=true in "addons-178002"
	I1026 08:14:18.494297  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.494802  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.348257  296225 addons.go:69] Setting storage-provisioner=true in profile "addons-178002"
	I1026 08:14:18.510842  296225 addons.go:238] Setting addon storage-provisioner=true in "addons-178002"
	I1026 08:14:18.510885  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.511376  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.511619  296225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:14:18.515660  296225 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 08:14:18.521621  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 08:14:18.521689  296225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 08:14:18.521782  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.348262  296225 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-178002"
	I1026 08:14:18.523608  296225 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-178002"
	I1026 08:14:18.523953  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.342601  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.348270  296225 addons.go:69] Setting volumesnapshots=true in profile "addons-178002"
	I1026 08:14:18.576743  296225 addons.go:238] Setting addon volumesnapshots=true in "addons-178002"
	I1026 08:14:18.576785  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.610870  296225 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 08:14:18.613753  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.648480  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 08:14:18.650469  296225 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 08:14:18.651018  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.651076  296225 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 08:14:18.652558  296225 addons.go:238] Setting addon default-storageclass=true in "addons-178002"
	I1026 08:14:18.660693  296225 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 08:14:18.660717  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 08:14:18.660785  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.663839  296225 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 08:14:18.665271  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 08:14:18.665338  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.667117  296225 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 08:14:18.667168  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 08:14:18.667256  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.687621  296225 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 08:14:18.664687  296225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
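The sed pipeline above splices a hosts block into the CoreDNS Corefile just before its forward stanza and a log directive just before errors; the rewritten portion should read roughly as follows (a sketch, other directives elided):

	cat <<'EOF'
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	EOF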
	I1026 08:14:18.665254  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 08:14:18.690944  296225 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 08:14:18.691021  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 08:14:18.691124  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.696051  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 08:14:18.699030  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 08:14:18.709441  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.710469  296225 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 08:14:18.715267  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 08:14:18.715513  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.716019  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.734613  296225 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 08:14:18.739517  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1026 08:14:18.783731  296225 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 08:14:18.791384  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 08:14:18.791801  296225 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 08:14:18.791818  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 08:14:18.791892  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.805846  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 08:14:18.812508  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.816882  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 08:14:18.817095  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 08:14:18.823990  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 08:14:18.831134  296225 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 08:14:18.831182  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 08:14:18.831270  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.832760  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 08:14:18.832784  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 08:14:18.832863  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.865957  296225 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 08:14:18.866029  296225 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:14:18.872754  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 08:14:18.872785  296225 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 08:14:18.872852  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.873028  296225 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:14:18.873044  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:14:18.873086  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.895179  296225 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 08:14:18.900572  296225 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 08:14:18.900594  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 08:14:18.900668  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.903353  296225 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-178002"
	I1026 08:14:18.903390  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:18.903796  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:18.919317  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.920394  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.921055  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:18.922755  296225 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:14:18.922771  296225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:14:18.922829  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.924147  296225 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 08:14:18.927066  296225 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 08:14:18.927089  296225 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 08:14:18.927153  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:18.983216  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.013276  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.036749  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.069412  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.072832  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.086419  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.099454  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.110446  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.111373  296225 out.go:179]   - Using image docker.io/busybox:stable
	I1026 08:14:19.113007  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	W1026 08:14:19.115008  296225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 08:14:19.115041  296225 retry.go:31] will retry after 162.946604ms: ssh: handshake failed: EOF
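The handshake EOF above is absorbed by minikube's retry helper (retry.go:31). A minimal shell sketch of the same retry-with-backoff pattern, with hypothetical attempt counts and delays (the real helper computes jittered waits such as the 162.946604ms shown):

        # Retry an SSH probe with doubling delays; values are illustrative.
        delay_ms=150
        for attempt in 1 2 3 4 5; do
          if ssh -p 33140 -i "$KEY" docker@127.0.0.1 true; then
            break                        # handshake succeeded
          fi
          echo "attempt ${attempt} failed; retrying in ${delay_ms}ms" >&2
          sleep "$(awk "BEGIN { print ${delay_ms}/1000 }")"
          delay_ms=$((delay_ms * 2))     # exponential backoff
        done

Here $KEY stands for the id_rsa path shown in the sshutil lines above.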
	I1026 08:14:19.117712  296225 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 08:14:19.120774  296225 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 08:14:19.120797  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 08:14:19.120857  296225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:14:19.120863  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:19.156041  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:19.417470  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 08:14:19.417532  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 08:14:19.436470  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 08:14:19.436533  296225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 08:14:19.546371  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 08:14:19.619412  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 08:14:19.650494  296225 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 08:14:19.650528  296225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 08:14:19.653412  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:14:19.653686  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 08:14:19.653700  296225 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 08:14:19.667178  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 08:14:19.679892  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:14:19.691109  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 08:14:19.693841  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 08:14:19.693881  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 08:14:19.713358  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 08:14:19.734906  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 08:14:19.747771  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 08:14:19.758864  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 08:14:19.773369  296225 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 08:14:19.773412  296225 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 08:14:19.775742  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 08:14:19.775766  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 08:14:19.793650  296225 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 08:14:19.793678  296225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 08:14:19.854698  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 08:14:19.854781  296225 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 08:14:19.932621  296225 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 08:14:19.932643  296225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 08:14:19.971710  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 08:14:19.971741  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 08:14:19.972580  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 08:14:19.972597  296225 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 08:14:19.979613  296225 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 08:14:19.979638  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 08:14:20.001821  296225 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:20.001855  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 08:14:20.195597  296225 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 08:14:20.195623  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 08:14:20.214571  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 08:14:20.214615  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 08:14:20.232492  296225 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 08:14:20.232519  296225 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 08:14:20.316811  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 08:14:20.329861  296225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.639970968s)
	I1026 08:14:20.329886  296225 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
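The 1.64s pipeline completed above is how that host record lands in CoreDNS: fetch the live ConfigMap, splice a hosts block in front of the forward directive with sed, and push the result back with kubectl replace. A trimmed sketch of the same technique (GNU sed, kubectl already pointed at the cluster):

        # Insert a static hosts{} block before CoreDNS's forward directive
        # and replace the ConfigMap in place.
        kubectl -n kube-system get configmap coredns -o yaml \
          | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
          | kubectl replace -f -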
	I1026 08:14:20.329832  296225 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2089502s)
	I1026 08:14:20.331481  296225 node_ready.go:35] waiting up to 6m0s for node "addons-178002" to be "Ready" ...
	I1026 08:14:20.344174  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:20.438075  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 08:14:20.438100  296225 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 08:14:20.504651  296225 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 08:14:20.504673  296225 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 08:14:20.529543  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 08:14:20.618064  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 08:14:20.618129  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 08:14:20.761501  296225 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 08:14:20.761567  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 08:14:20.835896  296225 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-178002" context rescaled to 1 replicas
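The rescale to 1 replica logged here is done through the API by kapi.go:214; the equivalent kubectl invocation, for reference:

        # Drop coredns to a single replica and confirm the rollout.
        kubectl -n kube-system scale deployment coredns --replicas=1
        kubectl -n kube-system rollout status deployment coredns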
	I1026 08:14:20.980642  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 08:14:20.980721  296225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 08:14:21.033516  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 08:14:21.138044  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.591629581s)
	I1026 08:14:21.234903  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.615454557s)
	I1026 08:14:21.234995  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.581561204s)
	I1026 08:14:21.256374  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 08:14:21.256451  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 08:14:21.508294  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.841077547s)
	I1026 08:14:21.571319  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 08:14:21.571400  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 08:14:21.722807  296225 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 08:14:21.722941  296225 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 08:14:21.912228  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1026 08:14:22.375930  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
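node_ready.go polls the node object until its Ready condition flips; the same wait, expressed as a one-shot kubectl command using the 6m budget noted earlier:

        # Block until the node reports Ready (or the timeout expires).
        kubectl wait --for=condition=Ready node/addons-178002 --timeout=6m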
	I1026 08:14:22.849765  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.158620181s)
	I1026 08:14:22.849871  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.169954197s)
	I1026 08:14:23.429460  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.716066123s)
	I1026 08:14:23.429679  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.694748887s)
	I1026 08:14:23.754777  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.006964678s)
	I1026 08:14:23.754812  296225 addons.go:479] Verifying addon metrics-server=true in "addons-178002"
	I1026 08:14:24.724048  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.965146881s)
	I1026 08:14:24.724082  296225 addons.go:479] Verifying addon ingress=true in "addons-178002"
	I1026 08:14:24.724336  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.407495397s)
	I1026 08:14:24.724450  296225 addons.go:479] Verifying addon registry=true in "addons-178002"
	I1026 08:14:24.724661  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.380457101s)
	W1026 08:14:24.724690  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:24.724706  296225 retry.go:31] will retry after 370.180227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
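The validation error is consistent with the earlier transfer of ig-crd.yaml at just 14 bytes: kubectl rejects any manifest that lacks the two mandatory top-level fields. A quick client-side check with a hypothetical minimal manifest:

        # Every manifest needs apiVersion and kind; without them kubectl
        # fails validation exactly as above. Dry-run to verify locally:
        cat <<'EOF' > /tmp/minimal.yaml
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: demo    # illustrative object, not part of the addon
        EOF
        kubectl apply --dry-run=client -f /tmp/minimal.yaml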
	I1026 08:14:24.724757  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.195146622s)
	I1026 08:14:24.725063  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.691465899s)
	W1026 08:14:24.725393  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 08:14:24.725412  296225 retry.go:31] will retry after 341.188253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
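The "ensure CRDs are installed first" message is the usual CRD/CR ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRD that defines it, before the API server has established the new kind. The standard two-phase workaround, sketched with the file names from the log:

        # Phase 1: create the CRDs and wait until they are established.
        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for=condition=established \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
        # Phase 2: only now create objects of the new kind.
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

In this run the retry at 08:14:25 reapplies the whole set with --force and succeeds, since the CRDs from the first pass are by then in place.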
	I1026 08:14:24.727569  296225 out.go:179] * Verifying ingress addon...
	I1026 08:14:24.729743  296225 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-178002 service yakd-dashboard -n yakd-dashboard
	
	I1026 08:14:24.729850  296225 out.go:179] * Verifying registry addon...
	I1026 08:14:24.733635  296225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 08:14:24.734622  296225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 08:14:24.739062  296225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 08:14:24.739083  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:24.742494  296225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 08:14:24.742514  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
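kapi.go:96 keeps polling the labelled pods until they leave Pending; the one-shot kubectl equivalent of each wait, for reference:

        # Block until the labelled pods report Ready (what kapi.go polls for).
        kubectl -n ingress-nginx wait --for=condition=Ready pod \
          -l app.kubernetes.io/name=ingress-nginx --timeout=6m
        kubectl -n kube-system wait --for=condition=Ready pod \
          -l kubernetes.io/minikube-addons=registry --timeout=6m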
	W1026 08:14:24.838601  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:25.018051  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.10572757s)
	I1026 08:14:25.018142  296225 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-178002"
	I1026 08:14:25.021547  296225 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 08:14:25.025343  296225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 08:14:25.029577  296225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 08:14:25.029598  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:25.066920  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 08:14:25.095482  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:25.238097  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:25.239144  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:25.530403  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:25.739198  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:25.739315  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:26.028815  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:26.237386  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:26.237841  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:26.336273  296225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 08:14:26.336395  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:26.353827  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:26.471924  296225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 08:14:26.484372  296225 addons.go:238] Setting addon gcp-auth=true in "addons-178002"
	I1026 08:14:26.484418  296225 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:14:26.484856  296225 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:14:26.501901  296225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 08:14:26.501965  296225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:14:26.519450  296225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:14:26.529760  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:26.736785  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:26.737995  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:27.028759  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:27.237043  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:27.237409  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:27.335418  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:27.532185  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:27.739930  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:27.740522  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:27.829914  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.762940686s)
	I1026 08:14:27.830060  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.734504608s)
	W1026 08:14:27.830087  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:27.830111  296225 retry.go:31] will retry after 332.647592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:27.830171  296225 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.328241877s)
	I1026 08:14:27.833471  296225 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 08:14:27.836568  296225 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 08:14:27.839361  296225 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 08:14:27.839399  296225 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 08:14:27.852859  296225 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 08:14:27.852883  296225 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 08:14:27.865975  296225 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 08:14:27.865999  296225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 08:14:27.879261  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 08:14:28.029073  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:28.163435  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:28.240144  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:28.240453  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:28.468420  296225 addons.go:479] Verifying addon gcp-auth=true in "addons-178002"
	I1026 08:14:28.471553  296225 out.go:179] * Verifying gcp-auth addon...
	I1026 08:14:28.474514  296225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 08:14:28.480773  296225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 08:14:28.480839  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:28.529495  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:28.736923  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:28.738492  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:28.978184  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:29.029696  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 08:14:29.111324  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:29.111360  296225 retry.go:31] will retry after 798.198303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:29.237883  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:29.239200  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:29.478070  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:29.529080  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:29.737174  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:29.738159  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:29.835250  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:29.910600  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:29.977695  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:30.030016  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:30.239329  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:30.240585  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:30.477981  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:30.528786  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:30.738001  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:30.738786  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:30.755784  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:30.755816  296225 retry.go:31] will retry after 1.020794769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:30.978027  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:31.029284  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:31.237999  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:31.238406  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:31.478427  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:31.528840  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:31.738617  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:31.739196  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:31.777327  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 08:14:31.835429  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:31.979995  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:32.031230  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:32.239361  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:32.239924  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:32.478748  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:32.529547  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 08:14:32.669558  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:32.669613  296225 retry.go:31] will retry after 1.109813752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:32.737241  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:32.737425  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:32.978894  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:33.029400  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:33.237873  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:33.238023  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:33.478021  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:33.529109  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:33.738427  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:33.738609  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:33.780495  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:33.978665  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:34.029511  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:34.238630  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:34.239621  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:34.335167  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:34.478755  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:34.529964  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 08:14:34.655609  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:34.655690  296225 retry.go:31] will retry after 1.76886346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:34.737404  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:34.738126  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:34.978675  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:35.029255  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:35.238445  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:35.238624  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:35.477710  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:35.529048  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:35.738356  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:35.738575  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:35.978608  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:36.029248  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:36.238211  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:36.238675  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:36.335824  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:36.425079  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:36.478406  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:36.528765  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:36.739054  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:36.739945  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:36.978047  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:37.030210  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:37.239621  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:37.240035  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:37.326293  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:37.326385  296225 retry.go:31] will retry after 4.147131696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:37.477647  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:37.528893  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:37.738096  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:37.738996  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:37.978115  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:38.030133  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:38.237935  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:38.238038  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:38.478106  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:38.529576  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:38.737519  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:38.738396  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:38.835281  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:38.978686  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:39.029084  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:39.237194  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:39.237690  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:39.478026  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:39.529501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:39.736648  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:39.737781  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:39.977742  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:40.030501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:40.238463  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:40.238789  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:40.478201  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:40.529067  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:40.737120  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:40.737581  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:40.978508  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:41.029496  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:41.237606  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:41.237779  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:41.334821  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:41.474236  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:41.478194  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:41.529640  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:41.738257  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:41.738685  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:41.978346  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:42.035550  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:42.240339  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:42.240679  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:42.335734  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:42.335770  296225 retry.go:31] will retry after 4.780772563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
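The retry intervals logged so far (1.77s, 4.15s, 4.78s) are characteristic of a capped backoff with jitter. The following is a minimal sketch of that pattern, assuming nothing about minikube's internal retry.go beyond what the log shows; the helper name and constants are illustrative:

// retry sketch: capped, jittered backoff around a fallible operation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxAttempts is exhausted,
// sleeping a jittered, growing interval between attempts.
func retryWithBackoff(maxAttempts int, base, maxBackoff time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		backoff := base << uint(attempt) // exponential growth...
		if backoff > maxBackoff {
			backoff = maxBackoff // ...clamped to a ceiling
		}
		sleep := base + time.Duration(rand.Int63n(int64(backoff))) // full jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	attempts := 0
	err := retryWithBackoff(6, time.Second, 8*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("apply failed (attempt %d)", attempts)
		}
		return nil
	})
	fmt.Println("result:", err)
}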
	I1026 08:14:42.478781  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:42.528798  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:42.737304  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:42.737835  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:42.977404  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:43.028435  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:43.236703  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:43.237210  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:43.337094  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:43.478461  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:43.529385  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:43.737615  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:43.737783  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:43.980153  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:44.029218  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:44.238857  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:44.239015  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:44.477752  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:44.528542  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:44.736761  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:44.737885  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:44.977547  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:45.033623  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:45.238873  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:45.239980  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:45.343562  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:45.477604  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:45.528784  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:45.737359  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:45.737503  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:45.977676  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:46.028757  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:46.237240  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:46.237850  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:46.477849  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:46.528716  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:46.736798  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:46.737670  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:46.977362  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:47.028532  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:47.117676  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:47.251251  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:47.252439  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:47.477980  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:47.528953  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:47.738466  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:47.738594  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:47.834839  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	W1026 08:14:47.958087  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:47.958120  296225 retry.go:31] will retry after 3.448056655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:47.977982  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:48.028983  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:48.238251  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:48.238374  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:48.478384  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:48.529384  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:48.736910  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:48.737433  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:48.977645  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:49.028362  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:49.237698  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:49.238135  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:49.477793  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:49.528925  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:49.737269  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:49.737765  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:49.834972  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:49.977894  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:50.029119  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:50.237191  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:50.237731  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:50.477359  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:50.529263  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:50.740190  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:50.740578  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:50.977374  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:51.028975  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:51.237376  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:51.237798  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:51.407086  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:51.478236  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:51.529623  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:51.738444  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:51.738574  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:51.977906  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:52.029150  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:52.239302  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:52.239430  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:52.243973  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:52.244003  296225 retry.go:31] will retry after 5.911071565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1026 08:14:52.334841  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:52.477781  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:52.528801  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:52.737710  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:52.738016  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:52.978339  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:53.028224  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:53.237766  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:53.237816  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:53.477626  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:53.528731  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:53.737175  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:53.737471  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:53.978463  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:54.029028  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:54.237825  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:54.238061  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 08:14:54.335623  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:54.477555  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:54.528359  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:54.738061  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:54.738172  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:54.977657  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:55.029002  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:55.237094  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:55.238162  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:55.477872  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:55.528861  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:55.737213  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:55.737420  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:55.977804  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:56.028662  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:56.237487  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:56.237645  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:56.477965  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:56.529077  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:56.737120  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:56.737454  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:56.835172  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	I1026 08:14:56.978032  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:57.028661  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:57.237682  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:57.237820  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:57.477451  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:57.529424  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:57.736950  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:57.737278  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:57.977953  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:58.029148  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:58.155294  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:14:58.239576  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:58.239941  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:58.478422  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:58.528923  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:58.737434  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:58.738069  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 08:14:58.835387  296225 node_ready.go:57] node "addons-178002" has "Ready":"False" status (will retry)
	W1026 08:14:58.964117  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:14:58.964157  296225 retry.go:31] will retry after 7.777231146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 08:14:58.977919  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:59.028851  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:59.237415  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:59.238151  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:14:59.499041  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:14:59.560133  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:14:59.740966  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:14:59.741353  296225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 08:14:59.741375  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
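The kapi.go lines poll the API server for pods matching a label selector, log the count once pods appear ("Found 2 Pods for label selector"), and keep waiting while any of them is still Pending. A minimal client-go sketch of that pattern; the namespace and use of the default kubeconfig are assumptions:

// pods_by_selector sketch: list pods for a label selector and show phases.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=registry", // selector from the log
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
	for _, p := range pods.Items {
		// a Pending phase here corresponds to the "current state: Pending" lines
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}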
	I1026 08:14:59.908671  296225 node_ready.go:49] node "addons-178002" is "Ready"
	I1026 08:14:59.908706  296225 node_ready.go:38] duration metric: took 39.577192311s for node "addons-178002" to be "Ready" ...
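node_ready.go declares the node Ready once the Node object's Ready condition flips to True, which is what finally resolves here after 39.6s of the "(will retry)" lines above. A minimal sketch of the same condition check; the 2s poll interval is inferred from the log timestamps:

// node_ready sketch: poll a Node's Ready condition until it is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	for {
		if ready, err := nodeReady(cs, "addons-178002"); err == nil && ready { // node name from the log
			break
		}
		fmt.Println(`node "addons-178002" has "Ready":"False" status (will retry)`)
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("took %s for node to be Ready\n", time.Since(start))
}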
	I1026 08:14:59.908721  296225 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:14:59.908797  296225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:14:59.930818  296225 api_server.go:72] duration metric: took 41.589286991s to wait for apiserver process to appear ...
	I1026 08:14:59.930847  296225 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:14:59.930879  296225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:14:59.962270  296225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:14:59.964353  296225 api_server.go:141] control plane version: v1.34.1
	I1026 08:14:59.964381  296225 api_server.go:131] duration metric: took 33.515648ms to wait for apiserver health ...
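With the node Ready, the tooling confirms the kube-apiserver process via pgrep and then polls the /healthz endpoint until it returns 200 with body "ok". A minimal sketch of that probe; skipping TLS verification is a simplification here, since minikube itself authenticates with the cluster CA and client certificates:

// healthz sketch: GET the apiserver /healthz endpoint and expect "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// assumption for brevity; do not do this against a real cluster
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.49.2:8443/healthz" // endpoint from the log
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// the log records "returned 200" followed by the body "ok"
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}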
	I1026 08:14:59.964390  296225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:14:59.997323  296225 system_pods.go:59] 19 kube-system pods found
	I1026 08:14:59.997428  296225 system_pods.go:61] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:14:59.997453  296225 system_pods.go:61] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending
	I1026 08:14:59.997504  296225 system_pods.go:61] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:14:59.997535  296225 system_pods.go:61] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:14:59.997558  296225 system_pods.go:61] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:14:59.997584  296225 system_pods.go:61] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:14:59.997614  296225 system_pods.go:61] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:14:59.997640  296225 system_pods.go:61] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:14:59.997671  296225 system_pods.go:61] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending
	I1026 08:14:59.997723  296225 system_pods.go:61] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:14:59.997832  296225 system_pods.go:61] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:14:59.997866  296225 system_pods.go:61] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:14:59.997891  296225 system_pods.go:61] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending
	I1026 08:14:59.997920  296225 system_pods.go:61] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending
	I1026 08:14:59.997950  296225 system_pods.go:61] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending
	I1026 08:14:59.998065  296225 system_pods.go:61] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:14:59.998102  296225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:14:59.998123  296225 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending
	I1026 08:14:59.998151  296225 system_pods.go:61] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:14:59.998223  296225 system_pods.go:74] duration metric: took 33.817074ms to wait for pod list to return data ...
	I1026 08:14:59.998896  296225 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:14:59.998584  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:00.009297  296225 default_sa.go:45] found service account: "default"
	I1026 08:15:00.009395  296225 default_sa.go:55] duration metric: took 10.482339ms for default service account to be created ...
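default_sa.go waits for the "default" ServiceAccount because pods in a namespace cannot be admitted until it exists. A minimal sketch of that check; the error handling is simplified, and minikube would retry here rather than return:

// default_sa sketch: confirm the "default" ServiceAccount has been created.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not created yet:", err) // would be retried in practice
		return
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}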
	I1026 08:15:00.009427  296225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:15:00.023374  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:00.023496  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:00.023522  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending
	I1026 08:15:00.023564  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:00.023595  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:00.023619  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:00.023643  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:00.023680  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:00.023709  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:00.023733  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending
	I1026 08:15:00.023754  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:00.023793  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:00.023824  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:00.023847  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending
	I1026 08:15:00.023869  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending
	I1026 08:15:00.023903  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending
	I1026 08:15:00.023935  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:00.023961  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.023986  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending
	I1026 08:15:00.024022  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:00.024067  296225 retry.go:31] will retry after 235.520543ms: missing components: kube-dns
	I1026 08:15:00.047537  296225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 08:15:00.047640  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:00.284763  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:00.294231  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:00.301046  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:00.301148  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:00.301175  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 08:15:00.301218  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:00.301257  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:00.301279  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:00.301303  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:00.301336  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:00.301369  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:00.301398  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 08:15:00.301420  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:00.303763  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:00.303845  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:00.303868  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending
	I1026 08:15:00.303894  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending
	I1026 08:15:00.303925  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending
	I1026 08:15:00.303953  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:00.303979  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.304004  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.304050  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:00.304096  296225 retry.go:31] will retry after 357.430229ms: missing components: kube-dns
	I1026 08:15:00.498940  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:00.532605  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:00.682494  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:00.682599  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:15:00.682626  296225 system_pods.go:89] "csi-hostpath-attacher-0" [ef65acbd-78e2-4703-a5fb-4515e2f09abd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 08:15:00.682670  296225 system_pods.go:89] "csi-hostpath-resizer-0" [18ca1f59-c618-4124-bf7c-c02bf049e5b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 08:15:00.682699  296225 system_pods.go:89] "csi-hostpathplugin-zbhlb" [879cdb1d-5607-497d-b3ee-6966fb1162c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 08:15:00.682783  296225 system_pods.go:89] "etcd-addons-178002" [5b8467b4-a37e-4ced-a58d-a280c1212e56] Running
	I1026 08:15:00.682813  296225 system_pods.go:89] "kindnet-bmsbv" [b5737cdb-b0b4-4aed-9ef1-5d08a55cd47a] Running
	I1026 08:15:00.682840  296225 system_pods.go:89] "kube-apiserver-addons-178002" [8471f79b-e092-4fbd-8ed7-e3746321da15] Running
	I1026 08:15:00.682878  296225 system_pods.go:89] "kube-controller-manager-addons-178002" [32921993-d628-4904-a2c8-696d4ed9c1a5] Running
	I1026 08:15:00.682906  296225 system_pods.go:89] "kube-ingress-dns-minikube" [d23bebc2-605a-40b8-afa3-c5ac194aa327] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 08:15:00.682935  296225 system_pods.go:89] "kube-proxy-s87tq" [547cf934-2c45-4a00-9c40-9534233d8639] Running
	I1026 08:15:00.682969  296225 system_pods.go:89] "kube-scheduler-addons-178002" [3de082ce-d843-4dba-ac53-16026cfc4176] Running
	I1026 08:15:00.682999  296225 system_pods.go:89] "metrics-server-85b7d694d7-bgt5w" [6e86d9d0-7758-431d-9fde-6370759a5d9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 08:15:00.683024  296225 system_pods.go:89] "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 08:15:00.683064  296225 system_pods.go:89] "registry-6b586f9694-t9spk" [7cd368e5-f221-4376-9edb-ba2a92bcbdd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 08:15:00.683093  296225 system_pods.go:89] "registry-creds-764b6fb674-c4cvz" [d49d499e-2d32-44e8-8b7d-61e797375c41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 08:15:00.683118  296225 system_pods.go:89] "registry-proxy-n9gsn" [b97f658c-f9d8-4663-be7b-157fe4c0d096] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 08:15:00.683154  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ppj6" [9c4b58fb-7503-4d14-8576-143e6fbdd899] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.683186  296225 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xk7" [ba033417-073e-41a6-bd34-535b06a96bd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 08:15:00.683214  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:15:00.683270  296225 retry.go:31] will retry after 414.669984ms: missing components: kube-dns
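
The retry.go:31 line above is minikube's generic retry helper at work: the pod check is re-run after a randomized, growing delay until the missing component (kube-dns, i.e. coredns) reports Running. Below is a minimal Go sketch of that retry-with-backoff pattern; the function name and constants are illustrative, not minikube's actual implementation.

	package main

	import (
		"errors"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs check until it succeeds, sleeping a
	// jittered, roughly doubling delay between attempts -- the same
	// shape as the "will retry after 414ms / 529ms / ..." lines in
	// this log.
	func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := check(); err == nil {
				return nil
			}
			// Up to 50% jitter so concurrent waiters do not retry in lockstep.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
			delay *= 2
		}
		return errors.New("components still missing after all retries")
	}
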
	I1026 08:15:00.749213  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:00.749311  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:00.979671  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:01.094151  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
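
Each kapi.go:96 line is one iteration of a label-selector poll: the addon waiters list pods matching a label and loop until those pods leave Pending. A client-go sketch of one such loop follows, with the selector taken from the log; the namespace, clientset construction, and Running check are simplified assumptions, not kapi.go's exact logic.

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls pods matching selector until one is Running,
	// e.g. waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry").
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // give up when the surrounding timeout fires
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
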
	I1026 08:15:01.118002  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:01.118100  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	[... the remaining 18 pods repeat the 08:15:00 listing verbatim: etcd, kindnet, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler Running; the 11 addon pods (csi-hostpath-*, kube-ingress-dns-minikube, metrics-server, nvidia-device-plugin, registry, registry-creds, registry-proxy, both snapshot-controllers) and storage-provisioner still Pending ...]
	I1026 08:15:01.118773  296225 retry.go:31] will retry after 529.848998ms: missing components: kube-dns
	I1026 08:15:01.239240  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:01.239681  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:01.490478  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:01.562652  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:01.655406  296225 system_pods.go:86] 19 kube-system pods found
	I1026 08:15:01.655445  296225 system_pods.go:89] "coredns-66bc5c9577-hbh8d" [e2860e7b-86ef-4394-aded-7b84c5fecde7] Running
	[... the 11 addon pods listed above still Pending and the six control-plane/system pods still Running, unchanged from the 08:15:01.118 listing ...]
	I1026 08:15:01.655583  296225 system_pods.go:89] "storage-provisioner" [69cbd60d-a97c-41c5-a1dd-c61112aca273] Running
	I1026 08:15:01.655593  296225 system_pods.go:126] duration metric: took 1.646131177s to wait for k8s-apps to be running ...
	I1026 08:15:01.655607  296225 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:15:01.655669  296225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:15:01.677560  296225 system_svc.go:56] duration metric: took 21.942206ms WaitForService to wait for kubelet
	I1026 08:15:01.677588  296225 kubeadm.go:586] duration metric: took 43.336062516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:15:01.677607  296225 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:15:01.681937  296225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:15:01.681970  296225 node_conditions.go:123] node cpu capacity is 2
	I1026 08:15:01.681987  296225 node_conditions.go:105] duration metric: took 4.372283ms to run NodePressure ...
	I1026 08:15:01.682000  296225 start.go:241] waiting for startup goroutines ...
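
The system_svc lines above verify the kubelet unit directly: `systemctl is-active --quiet kubelet` prints nothing and signals purely through its exit status, so a zero exit means the service is running (the node_conditions lines then read the node's reported ephemeral-storage and CPU capacity the same way). In minikube this command runs over the ssh_runner inside the node; the local exec.Command below is a stand-in sketch of the same probe.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// serviceActive reports whether a systemd unit is active.
	// `is-active --quiet` exits 0 only for an active unit.
	func serviceActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", serviceActive("kubelet"))
	}
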
	[... kapi.go:96 polling loops for "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth" and "kubernetes.io/minikube-addons=csi-hostpath-driver" repeat, each checking roughly twice a second, from 08:15:01.738 through 08:15:06.738; every check reports Pending: [<nil>] ...]
	I1026 08:15:06.742045  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:15:06.977341  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:07.028814  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:07.246938  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:07.247208  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:07.478299  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:07.530021  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:07.739736  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:07.740458  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:07.807486  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.065408891s)
	W1026 08:15:07.807519  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:15:07.807537  296225 retry.go:31] will retry after 18.446026225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
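
The apply fails kubectl's client-side validation: at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml is missing its top-level apiVersion and kind fields, which every Kubernetes object must carry, even though the other documents in the two files applied cleanly (everything in the stdout above is "unchanged" or "configured"). A small Go sketch reproducing that check; the decode loop is illustrative, not kubectl's actual validator.

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc == nil {
				continue // blank document between "---" separators
			}
			// The two fields kubectl's error says are not set.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion/kind not set\n", i)
			}
		}
	}
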
	[... the same four kapi.go:96 polling loops (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) continue from 08:15:07.977 through 08:15:26.238; every check reports Pending: [<nil>] ...]
	I1026 08:15:26.254669  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:15:26.478501  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:26.529422  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:26.738309  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:26.738928  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:26.978231  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:27.031663  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:27.239026  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:27.240294  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:27.373066  296225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.118353536s)
	W1026 08:15:27.373107  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:15:27.373128  296225 retry.go:31] will retry after 42.725156939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
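
The retry delay has grown from 18.4s to 42.7s, consistent with exponential backoff, but the failure is deterministic: the manifest itself is malformed, so every retry of the identical command fails identically. The only escape kubectl itself offers is the --validate=false flag named in its error message. A hedged sketch of that workaround follows; minikube's addons.go does not do this (it simply retries, as the log shows), and the binary and manifest paths are taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		args := []string{"apply", "--force",
			"-f", "/etc/kubernetes/addons/ig-crd.yaml",
			"-f", "/etc/kubernetes/addons/ig-deployment.yaml"}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil && strings.Contains(string(out), "error validating data") {
			// Retry once with client-side validation off, as the
			// kubectl error message itself suggests.
			out, err = exec.Command("kubectl", append(args, "--validate=false")...).CombinedOutput()
		}
		fmt.Printf("err=%v\n%s", err, out)
	}
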
	[... the same four kapi.go:96 polling loops (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) continue from 08:15:27.480 through 08:15:41.478; every check still reports Pending: [<nil>] ...]
	I1026 08:15:41.529958  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:41.738995  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:41.739343  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:41.981562  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:42.034397  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:42.241726  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:42.242278  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:42.478312  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:42.532118  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:42.739161  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:42.739527  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:42.980332  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:43.030903  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:43.240494  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:43.240825  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:43.478332  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:43.529926  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:43.737641  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:43.738181  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:43.978159  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:44.030139  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:44.236944  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:44.239059  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:44.478442  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:44.531043  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:44.739417  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:44.739554  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:44.977650  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:45.048679  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:45.240688  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:45.240752  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:45.478325  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:45.528609  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:45.737957  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:45.738136  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:45.978681  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:46.028977  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:46.238105  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:46.238954  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:46.477981  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:46.529375  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:46.739376  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:46.740368  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:46.978687  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:47.029584  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:47.238019  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:47.238843  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:47.477925  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:47.529323  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:47.742434  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:47.742608  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:47.977843  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:48.033250  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:48.237838  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:48.237956  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:48.478492  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:48.529399  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:48.741188  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 08:15:48.741383  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:48.977968  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:49.030444  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:49.239290  296225 kapi.go:107] duration metric: took 1m24.504662711s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 08:15:49.239692  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:49.478340  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:49.529337  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:49.738705  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:49.978206  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:50.030222  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:50.238262  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:50.478610  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:50.529017  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:50.738217  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:50.994806  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:51.032938  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:51.240338  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:51.477970  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:51.530471  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:51.737343  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:51.982965  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:52.030162  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:52.237407  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:52.477791  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:52.528830  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:52.738339  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:52.979635  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:53.029539  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:53.238402  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:53.478683  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:53.531118  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:53.741117  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:53.980074  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:54.030207  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:54.238096  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:54.478571  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:54.528571  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:54.741273  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:54.978684  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:55.030119  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:55.239146  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:55.479491  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:55.529142  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:55.740663  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:55.977960  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:56.029367  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:56.236953  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:56.477729  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:56.529188  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:56.737379  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:56.978566  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:57.029258  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:57.237450  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:57.477606  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:57.529496  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:57.738310  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:57.978759  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:58.029633  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:58.236914  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:58.477809  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:58.529218  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:58.763626  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:58.978365  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:59.028546  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:59.237339  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:59.477687  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:15:59.529451  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:15:59.743372  296225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 08:15:59.984010  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:00.128558  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:00.242110  296225 kapi.go:107] duration metric: took 1m35.50846925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 08:16:00.480410  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:00.530349  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:00.977490  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:01.030072  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:01.477624  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:01.529509  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:01.978001  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:02.029935  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:02.478416  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:02.528483  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:02.979617  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:03.030041  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:03.478674  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:03.529578  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:03.978606  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:04.029768  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:04.478464  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:04.529835  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:04.978692  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:05.029455  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:05.477913  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:05.529578  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:05.978412  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:06.081661  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:06.477499  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:06.529088  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:06.978066  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:07.029273  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:07.477585  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:07.529028  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:07.978272  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 08:16:08.038879  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:08.479159  296225 kapi.go:107] duration metric: took 1m40.004640372s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 08:16:08.482596  296225 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-178002 cluster.
	I1026 08:16:08.485636  296225 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 08:16:08.488522  296225 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 08:16:08.530127  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:09.029775  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:09.530416  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:10.030707  296225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 08:16:10.099063  296225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 08:16:10.529927  296225 kapi.go:107] duration metric: took 1m45.504581847s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W1026 08:16:10.944054  296225 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:16:10.944156  296225 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
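
The validation failure above means /etc/kubernetes/addons/ig-crd.yaml reached kubectl without top-level apiVersion and kind fields, which every applied object must declare. For reference, a minimal, illustrative header of the shape kubectl expects for a CRD manifest; the name and spec below are assumptions, since the actual Inspektor Gadget CRD contents are not shown in this log:

    # Hypothetical sketch: the two fields the error complains about are the point here.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.kinvolk.io   # assumed name, for illustration only
    spec: {}                           # a real CRD also needs group, names, and versions

With those two fields present, the same `kubectl apply --force -f ...` invocation would pass schema validation (or, as the error itself suggests, the check can be bypassed with --validate=false).
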
	I1026 08:16:10.947965  296225 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, default-storageclass, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1026 08:16:10.951223  296225 addons.go:514] duration metric: took 1m52.609271345s for enable addons: enabled=[nvidia-device-plugin registry-creds default-storageclass amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1026 08:16:10.951288  296225 start.go:246] waiting for cluster config update ...
	I1026 08:16:10.951315  296225 start.go:255] writing updated cluster config ...
	I1026 08:16:10.951623  296225 ssh_runner.go:195] Run: rm -f paused
	I1026 08:16:10.958578  296225 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:16:10.971159  296225 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hbh8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:10.994934  296225 pod_ready.go:94] pod "coredns-66bc5c9577-hbh8d" is "Ready"
	I1026 08:16:10.995006  296225 pod_ready.go:86] duration metric: took 23.815542ms for pod "coredns-66bc5c9577-hbh8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:10.998128  296225 pod_ready.go:83] waiting for pod "etcd-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.009222  296225 pod_ready.go:94] pod "etcd-addons-178002" is "Ready"
	I1026 08:16:11.009260  296225 pod_ready.go:86] duration metric: took 11.105166ms for pod "etcd-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.099128  296225 pod_ready.go:83] waiting for pod "kube-apiserver-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.104813  296225 pod_ready.go:94] pod "kube-apiserver-addons-178002" is "Ready"
	I1026 08:16:11.104842  296225 pod_ready.go:86] duration metric: took 5.682497ms for pod "kube-apiserver-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.107608  296225 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.363419  296225 pod_ready.go:94] pod "kube-controller-manager-addons-178002" is "Ready"
	I1026 08:16:11.363450  296225 pod_ready.go:86] duration metric: took 255.813417ms for pod "kube-controller-manager-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.562828  296225 pod_ready.go:83] waiting for pod "kube-proxy-s87tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:11.962952  296225 pod_ready.go:94] pod "kube-proxy-s87tq" is "Ready"
	I1026 08:16:11.963034  296225 pod_ready.go:86] duration metric: took 400.176991ms for pod "kube-proxy-s87tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:12.164606  296225 pod_ready.go:83] waiting for pod "kube-scheduler-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:12.563214  296225 pod_ready.go:94] pod "kube-scheduler-addons-178002" is "Ready"
	I1026 08:16:12.563241  296225 pod_ready.go:86] duration metric: took 398.607264ms for pod "kube-scheduler-addons-178002" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:16:12.563255  296225 pod_ready.go:40] duration metric: took 1.604603548s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:16:12.618064  296225 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 08:16:12.621306  296225 out.go:179] * Done! kubectl is now configured to use "addons-178002" cluster and "default" namespace by default
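
For reference, the gcp-auth advisory earlier in this log opts pods out of credential mounting via a label. A minimal sketch of such a pod, assuming the conventional "true" value -- the label key comes from the log itself; the value and all names here are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # key taken from the advisory; value assumed
    spec:
      containers:
      - name: app                      # hypothetical container name
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc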
	
	
	==> CRI-O <==
	Oct 26 08:16:09 addons-178002 crio[832]: time="2025-10-26T08:16:09.71602729Z" level=info msg="Created container 656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0: kube-system/csi-hostpathplugin-zbhlb/csi-snapshotter" id=fe788b9f-c4b3-48d8-80e6-25c7fd2eed05 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:16:09 addons-178002 crio[832]: time="2025-10-26T08:16:09.716791376Z" level=info msg="Starting container: 656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0" id=fc5f833d-0ab3-4ceb-b117-9eb47555db99 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:16:09 addons-178002 crio[832]: time="2025-10-26T08:16:09.721388552Z" level=info msg="Started container" PID=4952 containerID=656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0 description=kube-system/csi-hostpathplugin-zbhlb/csi-snapshotter id=fc5f833d-0ab3-4ceb-b117-9eb47555db99 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3fde5e44b9cd54bce4d39242f4ff1a3f70c15cdd2cd6818f5c199e6450bd8dc
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.139697132Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2e157c1f-f387-4669-bb1d-b3c64185cea3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.139775385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.146576334Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4118ae881c435dda09dba39e973b4f89f324581366756790963595213228a932 UID:73b86850-50e4-406d-ba5a-cbf3c70b1a29 NetNS:/var/run/netns/83ee6362-b64c-4613-94f8-8cea56ef84a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c808}] Aliases:map[]}"
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.146615957Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.161468683Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4118ae881c435dda09dba39e973b4f89f324581366756790963595213228a932 UID:73b86850-50e4-406d-ba5a-cbf3c70b1a29 NetNS:/var/run/netns/83ee6362-b64c-4613-94f8-8cea56ef84a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c808}] Aliases:map[]}"
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.161626239Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.165308944Z" level=info msg="Ran pod sandbox 4118ae881c435dda09dba39e973b4f89f324581366756790963595213228a932 with infra container: default/busybox/POD" id=2e157c1f-f387-4669-bb1d-b3c64185cea3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.166462803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e622c7c4-e9f0-4058-a121-30d3a274e21e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.166583797Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e622c7c4-e9f0-4058-a121-30d3a274e21e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.166620983Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e622c7c4-e9f0-4058-a121-30d3a274e21e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.172200028Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3c92c2ba-b955-451a-91bb-6a620a85bbaf name=/runtime.v1.ImageService/PullImage
	Oct 26 08:16:14 addons-178002 crio[832]: time="2025-10-26T08:16:14.174533361Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.169725127Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3c92c2ba-b955-451a-91bb-6a620a85bbaf name=/runtime.v1.ImageService/PullImage
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.170296801Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=13b48d63-2a7a-4231-91c7-0dc9b5907370 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.172335608Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b1e55aa-f02b-464a-9388-e89f0fa096a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.178841719Z" level=info msg="Creating container: default/busybox/busybox" id=2b4379ea-d8e1-4588-98dc-ea99698fba59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.178948806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.185673537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.186204316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.20414372Z" level=info msg="Created container 5dd800b50cf6e78490cb8ed8132eef9c29058a84b32ae130f3022e5ef23a2fd2: default/busybox/busybox" id=2b4379ea-d8e1-4588-98dc-ea99698fba59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.204849098Z" level=info msg="Starting container: 5dd800b50cf6e78490cb8ed8132eef9c29058a84b32ae130f3022e5ef23a2fd2" id=2b5407fa-3df6-479e-8f90-eaa3c3ee8b0e name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:16:16 addons-178002 crio[832]: time="2025-10-26T08:16:16.206493054Z" level=info msg="Started container" PID=5049 containerID=5dd800b50cf6e78490cb8ed8132eef9c29058a84b32ae130f3022e5ef23a2fd2 description=default/busybox/busybox id=2b5407fa-3df6-479e-8f90-eaa3c3ee8b0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4118ae881c435dda09dba39e973b4f89f324581366756790963595213228a932
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	5dd800b50cf6e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   4118ae881c435       busybox                                     default
	656a5504f6140       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	c400235dacf6d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 17 seconds ago       Running             gcp-auth                                 0                   0c83150fbe92a       gcp-auth-78565c9fb4-4bzxg                   gcp-auth
	6e68b380d42de       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          20 seconds ago       Running             csi-provisioner                          0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	cb1293525905b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            22 seconds ago       Running             liveness-probe                           0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	337cf8aa6fc1e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           22 seconds ago       Running             hostpath                                 0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	4bfbc1f9f76f8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                24 seconds ago       Running             node-driver-registrar                    0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	b40d381cbc670       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             25 seconds ago       Running             controller                               0                   7fd4481cf3cfd       ingress-nginx-controller-675c5ddd98-jfslq   ingress-nginx
	11cc83ea16cf9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            32 seconds ago       Running             gadget                                   0                   02cbd878753c5       gadget-fpvwp                                gadget
	65ec8b0fd7784       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   36 seconds ago       Exited              patch                                    0                   2b1bb31d3a927       gcp-auth-certs-patch-5t7dq                  gcp-auth
	32c75b62416fd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   36 seconds ago       Exited              create                                   0                   fa58c6f5fcb8c       gcp-auth-certs-create-l296l                 gcp-auth
	11f21105b6321       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              37 seconds ago       Running             registry-proxy                           0                   822b63ef6af37       registry-proxy-n9gsn                        kube-system
	9ac72e95bdbb9       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             40 seconds ago       Running             csi-attacher                             0                   cdc504816b9c8       csi-hostpath-attacher-0                     kube-system
	e01421ba1d79e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   41 seconds ago       Running             csi-external-health-monitor-controller   0                   e3fde5e44b9cd       csi-hostpathplugin-zbhlb                    kube-system
	7cb1110433e18       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               43 seconds ago       Running             minikube-ingress-dns                     0                   42636bb7ea5d8       kube-ingress-dns-minikube                   kube-system
	93c19aa863bec       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             50 seconds ago       Exited              patch                                    2                   fadfe17ca2102       ingress-nginx-admission-patch-9d9jx         ingress-nginx
	90f02169324ed       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              52 seconds ago       Running             yakd                                     0                   1cf41d655b554       yakd-dashboard-5ff678cb9-pr2hf              yakd-dashboard
	6f48f953a8791       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           56 seconds ago       Running             registry                                 0                   99d67686ac378       registry-6b586f9694-t9spk                   kube-system
	ff820531f07c3       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               57 seconds ago       Running             cloud-spanner-emulator                   0                   9e9ca2665b361       cloud-spanner-emulator-86bd5cbb97-kbp57     default
	29c4bdd09e074       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   1c5bffbc201d9       ingress-nginx-admission-create-thdtm        ingress-nginx
	0c22b9c32c7f1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   8f521adb78053       local-path-provisioner-648f6765c9-ftt78     local-path-storage
	2d44eec32cccd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   4406843563bb0       snapshot-controller-7d9fbc56b8-b6xk7        kube-system
	293368e4d2e35       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   eea3a34db48e7       metrics-server-85b7d694d7-bgt5w             kube-system
	b5798323fc825       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   8288897b29078       csi-hostpath-resizer-0                      kube-system
	610b6f1646fb9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   1622d406430d5       snapshot-controller-7d9fbc56b8-2ppj6        kube-system
	3289a391ffd5d       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   34fbc96fd08e6       nvidia-device-plugin-daemonset-b6795        kube-system
	55db9ad7dfb08       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   94ddfeaf4fb61       coredns-66bc5c9577-hbh8d                    kube-system
	c3689b3808378       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   6e62dce330317       storage-provisioner                         kube-system
	8c52f0a3eb944       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   8975e7eced745       kube-proxy-s87tq                            kube-system
	ed2c281df9eab       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   0a9940c9b31ec       kindnet-bmsbv                               kube-system
	e28a155094997       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   1be1e494e8772       kube-scheduler-addons-178002                kube-system
	a8be4f8cce6ed       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   a4e9f4835f54f       kube-controller-manager-addons-178002       kube-system
	a0394733465ef       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   f8cc8b9325162       kube-apiserver-addons-178002                kube-system
	6bd1c5cde2562       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   d03a17fc541b7       etcd-addons-178002                          kube-system
	
	
	==> coredns [55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794] <==
	[INFO] 10.244.0.7:47496 - 62242 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00007237s
	[INFO] 10.244.0.7:47496 - 1801 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002092075s
	[INFO] 10.244.0.7:47496 - 2164 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002043598s
	[INFO] 10.244.0.7:47496 - 55186 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000137675s
	[INFO] 10.244.0.7:47496 - 8160 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000089166s
	[INFO] 10.244.0.7:59374 - 26720 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000178193s
	[INFO] 10.244.0.7:59374 - 26498 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071878s
	[INFO] 10.244.0.7:51433 - 19960 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106848s
	[INFO] 10.244.0.7:51433 - 19516 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000195883s
	[INFO] 10.244.0.7:48015 - 10753 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108063s
	[INFO] 10.244.0.7:48015 - 10556 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083069s
	[INFO] 10.244.0.7:43807 - 40751 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001356264s
	[INFO] 10.244.0.7:43807 - 40964 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001527959s
	[INFO] 10.244.0.7:54908 - 64161 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158984s
	[INFO] 10.244.0.7:54908 - 63972 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076522s
	[INFO] 10.244.0.21:59030 - 11727 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000182632s
	[INFO] 10.244.0.21:34008 - 33059 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000129634s
	[INFO] 10.244.0.21:45650 - 17774 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163596s
	[INFO] 10.244.0.21:44278 - 47069 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155506s
	[INFO] 10.244.0.21:34621 - 31222 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00020312s
	[INFO] 10.244.0.21:37742 - 62714 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000206993s
	[INFO] 10.244.0.21:52412 - 26371 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001902305s
	[INFO] 10.244.0.21:40266 - 55033 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002141175s
	[INFO] 10.244.0.21:35362 - 21519 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002385863s
	[INFO] 10.244.0.21:36492 - 4485 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002826549s
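
The NXDOMAIN/NOERROR pairs above are ordinary cluster DNS search-path expansion: each name is retried against every entry in the pod's resolv.conf search list (namespace, svc, cluster, then the host's domain) before the bare name finally resolves. A hedged sketch of a pod dnsConfig that shortens that expansion for external lookups -- illustrative only, not part of this test run:

    apiVersion: v1
    kind: Pod
    metadata:
      name: fewer-dns-retries          # hypothetical pod name
    spec:
      dnsConfig:
        options:
        - name: ndots                  # with ndots=1, dotted external names skip the search list
          value: "1"
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc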
	
	
	==> describe nodes <==
	Name:               addons-178002
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-178002
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=addons-178002
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_14_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-178002
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-178002"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:14:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-178002
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:16:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:16:15 +0000   Sun, 26 Oct 2025 08:14:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:16:15 +0000   Sun, 26 Oct 2025 08:14:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:16:15 +0000   Sun, 26 Oct 2025 08:14:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:16:15 +0000   Sun, 26 Oct 2025 08:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-178002
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                22814fea-4664-4b05-819b-2c2b8600c797
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-kbp57      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gadget                      gadget-fpvwp                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gcp-auth                    gcp-auth-78565c9fb4-4bzxg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jfslq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m1s
	  kube-system                 coredns-66bc5c9577-hbh8d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m7s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 csi-hostpathplugin-zbhlb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 etcd-addons-178002                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m13s
	  kube-system                 kindnet-bmsbv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-addons-178002                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-addons-178002        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-s87tq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-addons-178002                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 metrics-server-85b7d694d7-bgt5w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m2s
	  kube-system                 nvidia-device-plugin-daemonset-b6795         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 registry-6b586f9694-t9spk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-c4cvz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-proxy-n9gsn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 snapshot-controller-7d9fbc56b8-2ppj6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-b6xk7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  local-path-storage          local-path-provisioner-648f6765c9-ftt78      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pr2hf               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m5s                   kube-proxy       
	  Normal   Starting                 2m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node addons-178002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node addons-178002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node addons-178002 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m12s                  kubelet          Node addons-178002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s                  kubelet          Node addons-178002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s                  kubelet          Node addons-178002 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m8s                   node-controller  Node addons-178002 event: Registered Node addons-178002 in Controller
	  Normal   NodeReady                86s                    kubelet          Node addons-178002 status is now: NodeReady
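The percentages in the Allocated resources table above follow directly from the node's Capacity, with kubectl rounding down: 1050m of CPU requests against 2 CPUs (2000m) and 638Mi of memory requests against 8022300Ki (about 7834Mi):

	cpu:    1050m / 2000m  = 52.5%  -> shown as 52%
	memory:  638Mi / 7834Mi =  8.1%  -> shown as  8%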
	
	
	==> dmesg <==
	[Oct26 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014214] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501900] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033459] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.999923] kauditd_printk_skb: 36 callbacks suppressed
	[Oct26 08:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 08:14] overlayfs: idmapped layers are currently not supported
	[  +0.063904] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f] <==
	{"level":"warn","ts":"2025-10-26T08:14:08.415475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.439975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.474530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.522813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.575074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.597207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.642199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.679834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.716176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.767027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.781302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.825238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.879325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.911720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.935079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:08.978812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:09.003323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:09.022838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:09.119825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:25.441501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:25.459257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.190705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.224137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.254292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:14:47.272544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49262","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c400235dacf6d8d71200444627a71086d5212c6c6fe543d1b5f0bb91ca5d6b2b] <==
	2025/10/26 08:16:07 GCP Auth Webhook started!
	2025/10/26 08:16:13 Ready to marshal response ...
	2025/10/26 08:16:13 Ready to write response ...
	2025/10/26 08:16:13 Ready to marshal response ...
	2025/10/26 08:16:13 Ready to write response ...
	2025/10/26 08:16:13 Ready to marshal response ...
	2025/10/26 08:16:13 Ready to write response ...
	
	
	==> kernel <==
	 08:16:25 up  1:58,  0 user,  load average: 2.50, 3.35, 3.68
	Linux addons-178002 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d] <==
	E1026 08:14:49.112863       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 08:14:49.112873       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 08:14:49.112909       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 08:14:49.112982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 08:14:50.618071       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:14:50.618103       1 metrics.go:72] Registering metrics
	I1026 08:14:50.618186       1 controller.go:711] "Syncing nftables rules"
	I1026 08:14:59.118065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:14:59.118125       1 main.go:301] handling current node
	I1026 08:15:09.115792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:15:09.115822       1 main.go:301] handling current node
	I1026 08:15:19.110961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:15:19.111035       1 main.go:301] handling current node
	I1026 08:15:29.111449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:15:29.111495       1 main.go:301] handling current node
	I1026 08:15:39.111888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:15:39.111918       1 main.go:301] handling current node
	I1026 08:15:49.111381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:15:49.111409       1 main.go:301] handling current node
	I1026 08:15:59.111559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:15:59.111629       1 main.go:301] handling current node
	I1026 08:16:09.111881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:16:09.111910       1 main.go:301] handling current node
	I1026 08:16:19.111733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:16:19.111772       1 main.go:301] handling current node
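The initial "Failed to watch ... i/o timeout" errors against 10.96.0.1:443 look like the usual bring-up race: kindnet dials the kubernetes Service ClusterIP before the service NAT rules are fully in place, and the informers recover on retry, as the "Caches are synced" line at 08:14:50 and the steady ten-second node-handling loop afterwards show. The ClusterIP being dialed is the default kubernetes Service:

	kubectl --context addons-178002 get svc kubernetes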
	
	
	==> kube-apiserver [a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d] <==
	I1026 08:14:24.907601       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1026 08:14:24.980829       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.107.175.54"}
	W1026 08:14:25.441538       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:25.456548       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1026 08:14:28.274774       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.111.97"}
	W1026 08:14:47.184681       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:47.214835       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:47.254184       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:47.269608       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1026 08:14:59.537397       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.111.97:443: connect: connection refused
	E1026 08:14:59.537449       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.111.97:443: connect: connection refused" logger="UnhandledError"
	W1026 08:14:59.537914       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.111.97:443: connect: connection refused
	E1026 08:14:59.537949       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.111.97:443: connect: connection refused" logger="UnhandledError"
	W1026 08:14:59.626864       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.111.97:443: connect: connection refused
	E1026 08:14:59.626912       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.111.97:443: connect: connection refused" logger="UnhandledError"
	E1026 08:15:22.722067       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.121:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.121:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.151.121:443: connect: connection refused" logger="UnhandledError"
	W1026 08:15:22.722379       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 08:15:22.722485       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 08:15:22.766586       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 08:16:23.083736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45008: use of closed network connection
	E1026 08:16:23.352171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45026: use of closed network connection
	E1026 08:16:23.506149       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45056: use of closed network connection
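Two distinct messages are mixed in this log. The "failing open" webhook lines mean the gcp-auth mutating webhook was unreachable while its pod was still starting, yet admission proceeded anyway, which is the behaviour of failurePolicy: Ignore; the later "use of closed network connection" errors are clients hanging up mid-request and are harmless. The configured webhook policy can be checked with:

	kubectl --context addons-178002 get mutatingwebhookconfigurations \
	  -o custom-columns=NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy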
	
	
	==> kube-controller-manager [a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56] <==
	I1026 08:14:17.169510       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:14:17.169526       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:14:17.175709       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:14:17.205167       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 08:14:17.205304       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 08:14:17.205374       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 08:14:17.205405       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 08:14:17.205448       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 08:14:17.208624       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 08:14:17.209321       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:14:17.210857       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:14:17.211011       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:14:17.214473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:14:17.215421       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-178002" podCIDRs=["10.244.0.0/24"]
	E1026 08:14:23.472775       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 08:14:47.173543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 08:14:47.173702       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 08:14:47.173748       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 08:14:47.228591       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 08:14:47.234944       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 08:14:47.273859       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:14:48.335790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:15:02.167964       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1026 08:15:17.280205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 08:15:18.347539       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
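The "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors from the resource-quota and garbage-collector controllers persist only while metrics-server is not yet answering on its aggregated API; they clear once the APIService reports Available. The APIService status tells the same story:

	kubectl --context addons-178002 get apiservice v1beta1.metrics.k8s.io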
	
	
	==> kube-proxy [8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1] <==
	I1026 08:14:20.150608       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:14:20.280417       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:14:20.398289       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:14:20.398319       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 08:14:20.398398       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:14:20.475729       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:14:20.475781       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:14:20.491720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:14:20.492160       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:14:20.492182       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:14:20.493545       1 config.go:200] "Starting service config controller"
	I1026 08:14:20.493566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:14:20.493582       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:14:20.493587       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:14:20.493597       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:14:20.493607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:14:20.494276       1 config.go:309] "Starting node config controller"
	I1026 08:14:20.494290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:14:20.494296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:14:20.594114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:14:20.594155       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:14:20.594188       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
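The only error here is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP, and the message itself suggests restricting that with --nodeport-addresses primary. On a kubeadm-style cluster the effective setting lives in the kube-proxy ConfigMap (the field name is taken from KubeProxyConfiguration and assumed to apply to this setup):

	kubectl --context addons-178002 -n kube-system get cm kube-proxy \
	  -o yaml | grep -A2 nodePortAddresses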
	
	
	==> kube-scheduler [e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557] <==
	E1026 08:14:10.284512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:14:10.284613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:14:10.284681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:14:10.285165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:14:10.285238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:14:10.285289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:14:10.285333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:14:10.285388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:14:10.285430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:14:10.285493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:14:10.285540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:14:10.285586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:14:10.285609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:14:10.291179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:14:10.291538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:14:10.291644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:14:10.291734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:14:11.121865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:14:11.190574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:14:11.251077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:14:11.276694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:14:11.306600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:14:11.382166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:14:11.760688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 08:14:14.844478       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
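The burst of "is forbidden" list errors is confined to the first seconds after apiserver start, while the scheduler's informers race the RBAC bootstrap; the closing "Caches are synced" line at 08:14:14 shows every watch eventually succeeded. The binding the scheduler relies on is a bootstrap default:

	kubectl --context addons-178002 get clusterrolebinding system:kube-scheduler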
	
	
	==> kubelet <==
	Oct 26 08:15:48 addons-178002 kubelet[1283]: I1026 08:15:48.868178    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-n9gsn" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:15:49 addons-178002 kubelet[1283]: I1026 08:15:49.875666    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-n9gsn" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:15:49 addons-178002 kubelet[1283]: I1026 08:15:49.956565    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-n9gsn" podStartSLOduration=4.106848732 podStartE2EDuration="50.95654332s" podCreationTimestamp="2025-10-26 08:14:59 +0000 UTC" firstStartedPulling="2025-10-26 08:15:01.128770993 +0000 UTC m=+48.135810022" lastFinishedPulling="2025-10-26 08:15:47.978465573 +0000 UTC m=+94.985504610" observedRunningTime="2025-10-26 08:15:48.908356263 +0000 UTC m=+95.915395317" watchObservedRunningTime="2025-10-26 08:15:49.95654332 +0000 UTC m=+96.963582382"
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.070986    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xt2t\" (UniqueName: \"kubernetes.io/projected/7a1aad52-1431-451a-823c-e6f9a3cf3f93-kube-api-access-6xt2t\") pod \"7a1aad52-1431-451a-823c-e6f9a3cf3f93\" (UID: \"7a1aad52-1431-451a-823c-e6f9a3cf3f93\") "
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.073626    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1aad52-1431-451a-823c-e6f9a3cf3f93-kube-api-access-6xt2t" (OuterVolumeSpecName: "kube-api-access-6xt2t") pod "7a1aad52-1431-451a-823c-e6f9a3cf3f93" (UID: "7a1aad52-1431-451a-823c-e6f9a3cf3f93"). InnerVolumeSpecName "kube-api-access-6xt2t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.171446    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn9dp\" (UniqueName: \"kubernetes.io/projected/5f549238-6cc4-49b1-925a-771e3dd6b153-kube-api-access-kn9dp\") pod \"5f549238-6cc4-49b1-925a-771e3dd6b153\" (UID: \"5f549238-6cc4-49b1-925a-771e3dd6b153\") "
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.171998    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6xt2t\" (UniqueName: \"kubernetes.io/projected/7a1aad52-1431-451a-823c-e6f9a3cf3f93-kube-api-access-6xt2t\") on node \"addons-178002\" DevicePath \"\""
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.175570    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f549238-6cc4-49b1-925a-771e3dd6b153-kube-api-access-kn9dp" (OuterVolumeSpecName: "kube-api-access-kn9dp") pod "5f549238-6cc4-49b1-925a-771e3dd6b153" (UID: "5f549238-6cc4-49b1-925a-771e3dd6b153"). InnerVolumeSpecName "kube-api-access-kn9dp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.272545    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kn9dp\" (UniqueName: \"kubernetes.io/projected/5f549238-6cc4-49b1-925a-771e3dd6b153-kube-api-access-kn9dp\") on node \"addons-178002\" DevicePath \"\""
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.880427    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1bb31d3a92711c28072009ed7a1b9810a0d0010d674a19548006a54bb436d4"
	Oct 26 08:15:50 addons-178002 kubelet[1283]: I1026 08:15:50.884957    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa58c6f5fcb8c13f647502377236d121ed551a2150882b7ce84b37c7a9c784ac"
	Oct 26 08:15:52 addons-178002 kubelet[1283]: I1026 08:15:52.910165    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-fpvwp" podStartSLOduration=65.200232352 podStartE2EDuration="1m28.910148475s" podCreationTimestamp="2025-10-26 08:14:24 +0000 UTC" firstStartedPulling="2025-10-26 08:15:28.908465664 +0000 UTC m=+75.915504701" lastFinishedPulling="2025-10-26 08:15:52.618381787 +0000 UTC m=+99.625420824" observedRunningTime="2025-10-26 08:15:52.908772912 +0000 UTC m=+99.915811957" watchObservedRunningTime="2025-10-26 08:15:52.910148475 +0000 UTC m=+99.917187520"
	Oct 26 08:16:03 addons-178002 kubelet[1283]: I1026 08:16:03.324786    1283 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 26 08:16:03 addons-178002 kubelet[1283]: I1026 08:16:03.325621    1283 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 26 08:16:03 addons-178002 kubelet[1283]: E1026 08:16:03.636626    1283 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 26 08:16:03 addons-178002 kubelet[1283]: E1026 08:16:03.636717    1283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d49d499e-2d32-44e8-8b7d-61e797375c41-gcr-creds podName:d49d499e-2d32-44e8-8b7d-61e797375c41 nodeName:}" failed. No retries permitted until 2025-10-26 08:17:07.636700048 +0000 UTC m=+174.643739085 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d49d499e-2d32-44e8-8b7d-61e797375c41-gcr-creds") pod "registry-creds-764b6fb674-c4cvz" (UID: "d49d499e-2d32-44e8-8b7d-61e797375c41") : secret "registry-creds-gcr" not found
	Oct 26 08:16:08 addons-178002 kubelet[1283]: I1026 08:16:08.027443    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-jfslq" podStartSLOduration=77.117190049 podStartE2EDuration="1m44.027411342s" podCreationTimestamp="2025-10-26 08:14:24 +0000 UTC" firstStartedPulling="2025-10-26 08:15:32.802875688 +0000 UTC m=+79.809914725" lastFinishedPulling="2025-10-26 08:15:59.713096981 +0000 UTC m=+106.720136018" observedRunningTime="2025-10-26 08:15:59.957073781 +0000 UTC m=+106.964112818" watchObservedRunningTime="2025-10-26 08:16:08.027411342 +0000 UTC m=+115.034450379"
	Oct 26 08:16:10 addons-178002 kubelet[1283]: I1026 08:16:10.088549    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-4bzxg" podStartSLOduration=98.123687841 podStartE2EDuration="1m42.088529825s" podCreationTimestamp="2025-10-26 08:14:28 +0000 UTC" firstStartedPulling="2025-10-26 08:16:03.871839723 +0000 UTC m=+110.878878768" lastFinishedPulling="2025-10-26 08:16:07.836681715 +0000 UTC m=+114.843720752" observedRunningTime="2025-10-26 08:16:08.031221835 +0000 UTC m=+115.038260905" watchObservedRunningTime="2025-10-26 08:16:10.088529825 +0000 UTC m=+117.095568879"
	Oct 26 08:16:10 addons-178002 kubelet[1283]: I1026 08:16:10.089329    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-zbhlb" podStartSLOduration=2.512482046 podStartE2EDuration="1m11.089317386s" podCreationTimestamp="2025-10-26 08:14:59 +0000 UTC" firstStartedPulling="2025-10-26 08:15:01.042588196 +0000 UTC m=+48.049627233" lastFinishedPulling="2025-10-26 08:16:09.619423536 +0000 UTC m=+116.626462573" observedRunningTime="2025-10-26 08:16:10.083843729 +0000 UTC m=+117.090882783" watchObservedRunningTime="2025-10-26 08:16:10.089317386 +0000 UTC m=+117.096356431"
	Oct 26 08:16:13 addons-178002 kubelet[1283]: I1026 08:16:13.938744    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5w8\" (UniqueName: \"kubernetes.io/projected/73b86850-50e4-406d-ba5a-cbf3c70b1a29-kube-api-access-qp5w8\") pod \"busybox\" (UID: \"73b86850-50e4-406d-ba5a-cbf3c70b1a29\") " pod="default/busybox"
	Oct 26 08:16:13 addons-178002 kubelet[1283]: I1026 08:16:13.938805    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/73b86850-50e4-406d-ba5a-cbf3c70b1a29-gcp-creds\") pod \"busybox\" (UID: \"73b86850-50e4-406d-ba5a-cbf3c70b1a29\") " pod="default/busybox"
	Oct 26 08:16:19 addons-178002 kubelet[1283]: I1026 08:16:19.157475    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b6795" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 08:16:21 addons-178002 kubelet[1283]: I1026 08:16:21.043941    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=6.039506394 podStartE2EDuration="8.043921404s" podCreationTimestamp="2025-10-26 08:16:13 +0000 UTC" firstStartedPulling="2025-10-26 08:16:14.166908362 +0000 UTC m=+121.173947399" lastFinishedPulling="2025-10-26 08:16:16.171323372 +0000 UTC m=+123.178362409" observedRunningTime="2025-10-26 08:16:17.120601872 +0000 UTC m=+124.127640917" watchObservedRunningTime="2025-10-26 08:16:21.043921404 +0000 UTC m=+128.050960441"
	Oct 26 08:16:21 addons-178002 kubelet[1283]: I1026 08:16:21.160073    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f549238-6cc4-49b1-925a-771e3dd6b153" path="/var/lib/kubelet/pods/5f549238-6cc4-49b1-925a-771e3dd6b153/volumes"
	Oct 26 08:16:21 addons-178002 kubelet[1283]: I1026 08:16:21.160462    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1aad52-1431-451a-823c-e6f9a3cf3f93" path="/var/lib/kubelet/pods/7a1aad52-1431-451a-823c-e6f9a3cf3f93/volumes"
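The recurring "secret \"gcp-auth\" not found" warnings come from pods whose spec references a gcp-auth image-pull secret that does not yet exist in kube-system, most likely injected by the gcp-auth addon's webhook before the per-namespace secret was created (the webhook only reported ready at 08:16:07); the pulls still succeed because the images are public. Whether the secret exists now can be checked with:

	kubectl --context addons-178002 -n kube-system get secret gcp-auth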
	
	
	==> storage-provisioner [c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7] <==
	W1026 08:16:01.643591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:03.649171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:03.654046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:05.656638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:05.661336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:07.664761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:07.669692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:09.672673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:09.737117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:11.740802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:11.745357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:13.748685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:13.753702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:15.756860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:15.761423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:17.764258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:17.771064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:19.774337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:19.778818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:21.782420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:21.787228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:23.791279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:23.796091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:25.799210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:16:25.806954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
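These warnings arrive in pairs every two seconds because the provisioner's leader-election lock is stored as a v1 Endpoints object, which client-go now flags as deprecated on every renewal; functionality is unaffected. Assuming minikube's usual lock name, the object can be inspected with:

	kubectl --context addons-178002 -n kube-system get endpoints k8s.io-minikube-hostpath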
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-178002 -n addons-178002
helpers_test.go:269: (dbg) Run:  kubectl --context addons-178002 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx registry-creds-764b6fb674-c4cvz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-178002 describe pod ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx registry-creds-764b6fb674-c4cvz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-178002 describe pod ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx registry-creds-764b6fb674-c4cvz: exit status 1 (87.796108ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-thdtm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9d9jx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-c4cvz" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-178002 describe pod ingress-nginx-admission-create-thdtm ingress-nginx-admission-patch-9d9jx registry-creds-764b6fb674-c4cvz: exit status 1
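The NotFound results are a race in the post-mortem helper rather than a cluster problem: the three pods existed when the field-selector listing ran, but had most likely been cleaned up (the completed admission jobs) or replaced (registry-creds) by the time describe was invoked a moment later.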
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable headlamp --alsologtostderr -v=1: exit status 11 (267.531784ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:16:26.759001  302876 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:16:26.760032  302876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:26.760077  302876 out.go:374] Setting ErrFile to fd 2...
	I1026 08:16:26.760099  302876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:16:26.760395  302876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:16:26.760717  302876 mustload.go:65] Loading cluster: addons-178002
	I1026 08:16:26.761157  302876 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:26.761201  302876 addons.go:606] checking whether the cluster is paused
	I1026 08:16:26.761329  302876 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:16:26.761361  302876 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:16:26.761916  302876 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:16:26.779525  302876 ssh_runner.go:195] Run: systemctl --version
	I1026 08:16:26.779579  302876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:16:26.797084  302876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:16:26.905839  302876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:16:26.906005  302876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:16:26.938188  302876 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:16:26.938208  302876 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:16:26.938212  302876 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:16:26.938216  302876 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:16:26.938219  302876 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:16:26.938223  302876 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:16:26.938226  302876 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:16:26.938230  302876 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:16:26.938233  302876 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:16:26.938244  302876 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:16:26.938248  302876 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:16:26.938251  302876 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:16:26.938254  302876 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:16:26.938311  302876 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:16:26.938318  302876 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:16:26.938323  302876 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:16:26.938327  302876 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:16:26.938332  302876 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:16:26.938365  302876 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:16:26.938372  302876 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:16:26.938378  302876 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:16:26.938381  302876 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:16:26.938384  302876 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:16:26.938387  302876 cri.go:89] found id: ""
	I1026 08:16:26.938479  302876 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:16:26.957581  302876 out.go:203] 
	W1026 08:16:26.960513  302876 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:16:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:16:26.960544  302876 out.go:285] * 
	* 
	W1026 08:16:26.966892  302876 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:16:26.969872  302876 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.17s)
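The disable command never reaches the addon itself: minikube's paused-state pre-check first lists kube-system containers through crictl (which succeeds above), then shells out to "sudo runc list -f json", which exits 1 because runc's default state root /run/runc does not exist on this node. A likely explanation is that this CRI-O build drives containers through a different OCI runtime state root (crun, for example), so /run/runc was never created. A minimal reproduction sketch, assuming shell access to the node (for example via "minikube ssh"); the crictl and runc commands are taken verbatim from the log, and the ls is an added diagnostic:

	# CRI-O itself sees the kube-system containers fine:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc consults its default state root, which was never created here:
	sudo ls /run/runc            # -> No such file or directory
	sudo runc list -f json       # -> exit 1, the error minikube surfaces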

TestAddons/parallel/CloudSpanner (6.28s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-kbp57" [2ad5691d-5a6e-49f3-92f7-9e4fa36000b1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003770933s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (260.167006ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:17:43.007438  304815 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:43.008314  304815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:43.008347  304815 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:43.008354  304815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:43.008675  304815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:43.009061  304815 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:43.009528  304815 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:43.009554  304815 addons.go:606] checking whether the cluster is paused
	I1026 08:17:43.009665  304815 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:43.009675  304815 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:43.010198  304815 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:43.030034  304815 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:43.030086  304815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:43.047968  304815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:43.157409  304815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:43.157498  304815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:43.190056  304815 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:43.190132  304815 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:43.190153  304815 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:43.190182  304815 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:43.190204  304815 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:43.190223  304815 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:43.190242  304815 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:43.190261  304815 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:43.190280  304815 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:43.190302  304815 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:43.190322  304815 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:43.190341  304815 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:43.190359  304815 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:43.190378  304815 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:43.190396  304815 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:43.190423  304815 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:43.190451  304815 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:43.190472  304815 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:43.190492  304815 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:43.190511  304815 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:43.190545  304815 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:43.190565  304815 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:43.190584  304815 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:43.190601  304815 cri.go:89] found id: ""
	I1026 08:17:43.190677  304815 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:43.206145  304815 out.go:203] 
	W1026 08:17:43.209059  304815 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:43.209085  304815 out.go:285] * 
	* 
	W1026 08:17:43.215530  304815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:43.218460  304815 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)
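This is the same MK_ADDON_DISABLE_PAUSED failure as Headlamp above, and LocalPath, NvidiaDevicePlugin, and Yakd below fail identically: each workload is verified healthy, and only the shared disable helper (addons_test.go:1053/1055) dies in the runc paused-state pre-check. One way to confirm the pattern from a saved copy of this report (report.txt is a hypothetical local filename):

	# Count the runc state-root errors across the whole run:
	grep -c 'open /run/runc: no such file or directory' report.txt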

TestAddons/parallel/LocalPath (8.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-178002 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-178002 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-178002 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d64bbf31-563d-4e48-94d9-a5f3abb48559] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d64bbf31-563d-4e48-94d9-a5f3abb48559] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d64bbf31-563d-4e48-94d9-a5f3abb48559] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004215461s
addons_test.go:967: (dbg) Run:  kubectl --context addons-178002 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 ssh "cat /opt/local-path-provisioner/pvc-8792530b-a4c3-4092-b81e-3346c6acb3ac_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-178002 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-178002 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.015491ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:17:36.712581  304712 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:36.713490  304712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:36.713532  304712 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:36.713555  304712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:36.713850  304712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:36.714184  304712 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:36.714624  304712 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:36.714666  304712 addons.go:606] checking whether the cluster is paused
	I1026 08:17:36.714837  304712 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:36.714873  304712 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:36.715387  304712 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:36.734529  304712 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:36.734588  304712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:36.761970  304712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:36.866452  304712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:36.866554  304712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:36.897871  304712 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:36.897953  304712 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:36.897974  304712 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:36.898003  304712 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:36.898023  304712 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:36.898046  304712 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:36.898065  304712 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:36.898085  304712 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:36.898105  304712 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:36.898128  304712 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:36.898157  304712 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:36.898176  304712 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:36.898195  304712 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:36.898215  304712 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:36.898236  304712 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:36.898257  304712 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:36.898287  304712 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:36.898311  304712 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:36.898329  304712 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:36.898349  304712 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:36.898374  304712 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:36.898394  304712 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:36.898439  304712 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:36.898460  304712 cri.go:89] found id: ""
	I1026 08:17:36.898530  304712 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:36.928181  304712 out.go:203] 
	W1026 08:17:36.931226  304712 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:36.931250  304712 out.go:285] * 
	* 
	W1026 08:17:36.937687  304712 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:36.940870  304712 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.49s)
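Note that the LocalPath scenario itself passes end to end (the PVC binds, the busybox pod completes, and file1 is read back over ssh); only the trailing addons disable step hits the shared runc pre-check failure. The jsonpath polling above, where helpers_test.go re-runs kubectl get pvc until .status.phase changes, can be approximated with kubectl wait; a sketch only, not what the harness literally runs:

	kubectl --context addons-178002 -n default wait pvc/test-pvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=5m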

TestAddons/parallel/NvidiaDevicePlugin (6.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-b6795" [a5818f79-5cd3-4628-a82d-9d6cc170dc87] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002867874s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (261.187183ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:17:21.969032  304347 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:21.969789  304347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:21.969807  304347 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:21.969815  304347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:21.970110  304347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:21.970451  304347 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:21.970917  304347 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:21.970939  304347 addons.go:606] checking whether the cluster is paused
	I1026 08:17:21.971080  304347 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:21.971098  304347 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:21.971671  304347 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:21.989788  304347 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:21.989867  304347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:22.010528  304347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:22.117315  304347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:22.117402  304347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:22.148667  304347 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:22.148711  304347 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:22.148716  304347 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:22.148721  304347 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:22.148725  304347 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:22.148728  304347 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:22.148732  304347 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:22.148735  304347 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:22.148738  304347 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:22.148766  304347 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:22.148776  304347 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:22.148779  304347 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:22.148782  304347 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:22.148785  304347 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:22.148788  304347 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:22.148798  304347 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:22.148806  304347 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:22.148811  304347 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:22.148814  304347 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:22.148818  304347 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:22.148824  304347 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:22.148827  304347 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:22.148850  304347 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:22.148855  304347 cri.go:89] found id: ""
	I1026 08:17:22.148931  304347 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:22.164838  304347 out.go:203] 
	W1026 08:17:22.167772  304347 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:22.167798  304347 out.go:285] * 
	* 
	W1026 08:17:22.174380  304347 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:22.177403  304347 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

TestAddons/parallel/Yakd (6.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pr2hf" [bdd532eb-2642-4e10-bf53-50d8789cc3d6] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00375285s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-178002 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-178002 addons disable yakd --alsologtostderr -v=1: exit status 11 (269.849124ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 08:17:28.243459  304409 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:17:28.244372  304409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:28.244390  304409 out.go:374] Setting ErrFile to fd 2...
	I1026 08:17:28.244396  304409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:17:28.244659  304409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:17:28.244960  304409 mustload.go:65] Loading cluster: addons-178002
	I1026 08:17:28.245374  304409 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:28.245394  304409 addons.go:606] checking whether the cluster is paused
	I1026 08:17:28.245502  304409 config.go:182] Loaded profile config "addons-178002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:17:28.245518  304409 host.go:66] Checking if "addons-178002" exists ...
	I1026 08:17:28.245972  304409 cli_runner.go:164] Run: docker container inspect addons-178002 --format={{.State.Status}}
	I1026 08:17:28.266997  304409 ssh_runner.go:195] Run: systemctl --version
	I1026 08:17:28.267076  304409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-178002
	I1026 08:17:28.286921  304409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/addons-178002/id_rsa Username:docker}
	I1026 08:17:28.389791  304409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:17:28.389888  304409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:17:28.425250  304409 cri.go:89] found id: "656a5504f614055908fe465d89ebee1a3f243d30b9a7ae323b6c7143791e27a0"
	I1026 08:17:28.425269  304409 cri.go:89] found id: "6e68b380d42de87f16e145e259f53d1ad909dd39159177e76ac04dfd15c8b08b"
	I1026 08:17:28.425276  304409 cri.go:89] found id: "cb1293525905bf6cec197d61d0b2ce51172ed5b2d93f1d0e16f4629d4bfce19b"
	I1026 08:17:28.425285  304409 cri.go:89] found id: "337cf8aa6fc1e2b8df2a4afabea0407f74f00141e1e2a5cd17039226887e1c99"
	I1026 08:17:28.425288  304409 cri.go:89] found id: "4bfbc1f9f76f81672a58f45c2aac1e75d74ce6f4bef6bc6554152636565a99bc"
	I1026 08:17:28.425292  304409 cri.go:89] found id: "11f21105b63211f851898fddb160a8f978a7fb2e1c0b0fe74f772f85654a3477"
	I1026 08:17:28.425295  304409 cri.go:89] found id: "9ac72e95bdbb96f1e8ff94310598b8c97efb7773874f7dfc450625170073c711"
	I1026 08:17:28.425298  304409 cri.go:89] found id: "e01421ba1d79e894813a846a7c3f4669dcb3eb735347304043e51099cf81e7df"
	I1026 08:17:28.425301  304409 cri.go:89] found id: "7cb1110433e1885db4ddb8d881840c0f3aa1341bcd20c69fcd48cd891fd13cf4"
	I1026 08:17:28.425311  304409 cri.go:89] found id: "6f48f953a87914ea3d47cb9c653fb9832746e021e723fa5d84d67f3c5642f550"
	I1026 08:17:28.425315  304409 cri.go:89] found id: "2d44eec32cccd866f31f313a5340180f6b873c3c6ba30e12a4800eaa635c3107"
	I1026 08:17:28.425318  304409 cri.go:89] found id: "293368e4d2e3591e40ae58b1eff43e2bbd6c77a4a05dbf39f72a68f6e72d643c"
	I1026 08:17:28.425321  304409 cri.go:89] found id: "b5798323fc8259676675c400c7efde5df267d978a5ab5bb4dc1ec74573806af1"
	I1026 08:17:28.425324  304409 cri.go:89] found id: "610b6f1646fb993375a23584057189fb158f9359c33b6d492e0b5b1f347531cc"
	I1026 08:17:28.425327  304409 cri.go:89] found id: "3289a391ffd5dd63e594c95c4666ad4b059810c9fd2f2fba7bc3762c78de61d9"
	I1026 08:17:28.425345  304409 cri.go:89] found id: "55db9ad7dfb08e9f0320dcb96a76fb3888a98bde8d797578877bfaa908229794"
	I1026 08:17:28.425348  304409 cri.go:89] found id: "c3689b380837844bbf8bf80fbdd61cd92013c7062cd11bd303dca8bac954bbb7"
	I1026 08:17:28.425352  304409 cri.go:89] found id: "8c52f0a3eb9444ad9aa04ccd4894cc21c17adec675e57ee146d88e88567e25e1"
	I1026 08:17:28.425355  304409 cri.go:89] found id: "ed2c281df9eabc689cb85522061920747997498291bb059381d2572ebd99d08d"
	I1026 08:17:28.425358  304409 cri.go:89] found id: "e28a155094997fadce41d0130c4ffe1026a0875b48086a3716350dbc79bf6557"
	I1026 08:17:28.425363  304409 cri.go:89] found id: "a8be4f8cce6ede35fc23d01ceba62e090b269309ef0233edfacb2b095a64ee56"
	I1026 08:17:28.425366  304409 cri.go:89] found id: "a0394733465ef2b8cfcc77b59f593e93f2b1b9ed0fde79392396bafed74e814d"
	I1026 08:17:28.425369  304409 cri.go:89] found id: "6bd1c5cde256244fcead44205b2d163af3fc1af6f6104d5ad453eb7c886e516f"
	I1026 08:17:28.425372  304409 cri.go:89] found id: ""
	I1026 08:17:28.425421  304409 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:17:28.442094  304409 out.go:203] 
	W1026 08:17:28.444934  304409 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:17:28.445001  304409 out.go:285] * 
	* 
	W1026 08:17:28.451868  304409 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:17:28.456605  304409 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-178002 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

TestFunctional/parallel/ServiceCmdConnect (603.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-622437 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-622437 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cq7c7" [95744394-f3de-4cdc-8398-e873ed7d02e9] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-cq7c7" [95744394-f3de-4cdc-8398-e873ed7d02e9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-622437 -n functional-622437
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-26 08:34:08.737974369 +0000 UTC m=+1260.131461435
functional_test.go:1645: (dbg) Run:  kubectl --context functional-622437 describe po hello-node-connect-7d85dfc575-cq7c7 -n default
functional_test.go:1645: (dbg) kubectl --context functional-622437 describe po hello-node-connect-7d85dfc575-cq7c7 -n default:
Name:             hello-node-connect-7d85dfc575-cq7c7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-622437/192.168.49.2
Start Time:       Sun, 26 Oct 2025 08:24:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-444kn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-444kn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cq7c7 to functional-622437
Normal   Pulling    7m4s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-622437 logs hello-node-connect-7d85dfc575-cq7c7 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-622437 logs hello-node-connect-7d85dfc575-cq7c7 -n default: exit status 1 (107.966153ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cq7c7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-622437 logs hello-node-connect-7d85dfc575-cq7c7 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-622437 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cq7c7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-622437/192.168.49.2
Start Time:       Sun, 26 Oct 2025 08:24:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-444kn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-444kn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cq7c7 to functional-622437
Normal   Pulling    7m5s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m5s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m5s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-622437 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-622437 logs -l app=hello-node-connect: exit status 1 (111.502612ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cq7c7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-622437 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-622437 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.59.54
IPs:                      10.103.59.54
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31077/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
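The events above pinpoint the root cause: the deployment was created with the unqualified image name kicbase/echo-server, and this node's CRI-O enforces short-name resolution, so the pull is rejected as ambiguous instead of defaulting to a registry. That yields the ErrImagePull/ImagePullBackOff loop and the empty Endpoints list on the service. A sketch of the fix, fully qualifying the image (assuming it lives on Docker Hub, as the short name suggests):

	kubectl --context functional-622437 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:latest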
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-622437
helpers_test.go:243: (dbg) docker inspect functional-622437:

-- stdout --
	[
	    {
	        "Id": "c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b",
	        "Created": "2025-10-26T08:20:39.40075549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311172,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:20:39.467410902Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b/hosts",
	        "LogPath": "/var/lib/docker/containers/c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b/c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b-json.log",
	        "Name": "/functional-622437",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-622437:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-622437",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c24663c5f08583d59dc40f6a98ce37891c7492dbae9434731753c176704b378b",
	                "LowerDir": "/var/lib/docker/overlay2/ad5dd9d91f8e75d091fce5dbbf00794ce098799a3f34283823e310ee387f5e90-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad5dd9d91f8e75d091fce5dbbf00794ce098799a3f34283823e310ee387f5e90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad5dd9d91f8e75d091fce5dbbf00794ce098799a3f34283823e310ee387f5e90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad5dd9d91f8e75d091fce5dbbf00794ce098799a3f34283823e310ee387f5e90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-622437",
	                "Source": "/var/lib/docker/volumes/functional-622437/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-622437",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-622437",
	                "name.minikube.sigs.k8s.io": "functional-622437",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1144ca916de9eef1994830d6e2a39a6981510c82450205828144c6e03f9cafd4",
	            "SandboxKey": "/var/run/docker/netns/1144ca916de9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-622437": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:c0:3c:fc:11:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d35e42682107ed5e1d25d4d557b6098951b3448810785e5e2a5779e9dbd25e39",
	                    "EndpointID": "c1dc47461eb5f3638e1746cbcc5dd5f1e1089cf2294b09aba105d2ef07dbf231",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-622437",
	                        "c24663c5f085"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
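
The inspect output shows the kicbase container itself was healthy at capture time: State.Status is "running", and each exposed port (22, 2376, 5000, 8441, 32443) is published on a 127.0.0.1 high port, with the apiserver reachable through 127.0.0.1:33153 -> 8441/tcp. A sketch for pulling a single mapping out of that JSON rather than scanning it by eye, reusing the same Go template minikube itself runs later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-622437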
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-622437 -n functional-622437
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 logs -n 25: (1.534689056s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-622437 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:22 UTC │ 26 Oct 25 08:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 26 Oct 25 08:22 UTC │ 26 Oct 25 08:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 26 Oct 25 08:22 UTC │ 26 Oct 25 08:22 UTC │
	│ kubectl │ functional-622437 kubectl -- --context functional-622437 get pods                                                          │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:22 UTC │ 26 Oct 25 08:22 UTC │
	│ start   │ -p functional-622437 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:22 UTC │ 26 Oct 25 08:23 UTC │
	│ service │ invalid-svc -p functional-622437                                                                                           │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │                     │
	│ config  │ functional-622437 config unset cpus                                                                                        │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ cp      │ functional-622437 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ config  │ functional-622437 config get cpus                                                                                          │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │                     │
	│ config  │ functional-622437 config set cpus 2                                                                                        │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ config  │ functional-622437 config get cpus                                                                                          │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ config  │ functional-622437 config unset cpus                                                                                        │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ ssh     │ functional-622437 ssh -n functional-622437 sudo cat /home/docker/cp-test.txt                                               │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ config  │ functional-622437 config get cpus                                                                                          │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │                     │
	│ ssh     │ functional-622437 ssh echo hello                                                                                           │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ cp      │ functional-622437 cp functional-622437:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2686172398/001/cp-test.txt │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ ssh     │ functional-622437 ssh cat /etc/hostname                                                                                    │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ ssh     │ functional-622437 ssh -n functional-622437 sudo cat /home/docker/cp-test.txt                                               │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ tunnel  │ functional-622437 tunnel --alsologtostderr                                                                                 │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │                     │
	│ tunnel  │ functional-622437 tunnel --alsologtostderr                                                                                 │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │                     │
	│ cp      │ functional-622437 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ ssh     │ functional-622437 ssh -n functional-622437 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │ 26 Oct 25 08:23 UTC │
	│ tunnel  │ functional-622437 tunnel --alsologtostderr                                                                                 │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:23 UTC │                     │
	│ addons  │ functional-622437 addons list                                                                                              │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ addons  │ functional-622437 addons list -o json                                                                                      │ functional-622437 │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:22:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:22:30.607176  315307 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:22:30.607270  315307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:22:30.607274  315307 out.go:374] Setting ErrFile to fd 2...
	I1026 08:22:30.607284  315307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:22:30.607613  315307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:22:30.608070  315307 out.go:368] Setting JSON to false
	I1026 08:22:30.608949  315307 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7501,"bootTime":1761459450,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:22:30.609026  315307 start.go:141] virtualization:  
	I1026 08:22:30.612483  315307 out.go:179] * [functional-622437] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:22:30.616283  315307 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:22:30.616461  315307 notify.go:220] Checking for updates...
	I1026 08:22:30.622187  315307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:22:30.624976  315307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:22:30.627784  315307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:22:30.630831  315307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:22:30.633744  315307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:22:30.637311  315307 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:22:30.637404  315307 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:22:30.671274  315307 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:22:30.671401  315307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:22:30.729790  315307 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-26 08:22:30.720237659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:22:30.729881  315307 docker.go:318] overlay module found
	I1026 08:22:30.732887  315307 out.go:179] * Using the docker driver based on existing profile
	I1026 08:22:30.735768  315307 start.go:305] selected driver: docker
	I1026 08:22:30.735775  315307 start.go:925] validating driver "docker" against &{Name:functional-622437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:22:30.735879  315307 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:22:30.735986  315307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:22:30.799696  315307 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-26 08:22:30.784688726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:22:30.800120  315307 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:22:30.800147  315307 cni.go:84] Creating CNI manager for ""
	I1026 08:22:30.800204  315307 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:22:30.800245  315307 start.go:349] cluster config:
	{Name:functional-622437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:22:30.803255  315307 out.go:179] * Starting "functional-622437" primary control-plane node in "functional-622437" cluster
	I1026 08:22:30.806249  315307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:22:30.809093  315307 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:22:30.811908  315307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:22:30.811955  315307 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 08:22:30.811963  315307 cache.go:58] Caching tarball of preloaded images
	I1026 08:22:30.812015  315307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:22:30.812064  315307 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:22:30.812072  315307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:22:30.812180  315307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/config.json ...
	I1026 08:22:30.830437  315307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:22:30.830449  315307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:22:30.830461  315307 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:22:30.830483  315307 start.go:360] acquireMachinesLock for functional-622437: {Name:mk09f700d456676db20124cbb8fa6b8312c6305a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:22:30.830533  315307 start.go:364] duration metric: took 34.979µs to acquireMachinesLock for "functional-622437"
	I1026 08:22:30.830552  315307 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:22:30.830556  315307 fix.go:54] fixHost starting: 
	I1026 08:22:30.830846  315307 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
	I1026 08:22:30.852901  315307 fix.go:112] recreateIfNeeded on functional-622437: state=Running err=<nil>
	W1026 08:22:30.852921  315307 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:22:30.856265  315307 out.go:252] * Updating the running docker "functional-622437" container ...
	I1026 08:22:30.856311  315307 machine.go:93] provisionDockerMachine start ...
	I1026 08:22:30.856396  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:30.873502  315307 main.go:141] libmachine: Using SSH client type: native
	I1026 08:22:30.873841  315307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1026 08:22:30.873848  315307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:22:31.022405  315307 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622437
	
	I1026 08:22:31.022439  315307 ubuntu.go:182] provisioning hostname "functional-622437"
	I1026 08:22:31.022510  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:31.042264  315307 main.go:141] libmachine: Using SSH client type: native
	I1026 08:22:31.042568  315307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1026 08:22:31.042578  315307 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-622437 && echo "functional-622437" | sudo tee /etc/hostname
	I1026 08:22:31.204264  315307 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622437
	
	I1026 08:22:31.204333  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:31.222856  315307 main.go:141] libmachine: Using SSH client type: native
	I1026 08:22:31.223149  315307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1026 08:22:31.223163  315307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-622437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-622437/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-622437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:22:31.374985  315307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:22:31.375012  315307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:22:31.375037  315307 ubuntu.go:190] setting up certificates
	I1026 08:22:31.375045  315307 provision.go:84] configureAuth start
	I1026 08:22:31.375109  315307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-622437
	I1026 08:22:31.392182  315307 provision.go:143] copyHostCerts
	I1026 08:22:31.392236  315307 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:22:31.392244  315307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:22:31.392319  315307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:22:31.392419  315307 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:22:31.392423  315307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:22:31.392446  315307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:22:31.392504  315307 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:22:31.392507  315307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:22:31.392528  315307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:22:31.392573  315307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.functional-622437 san=[127.0.0.1 192.168.49.2 functional-622437 localhost minikube]
	I1026 08:22:31.962953  315307 provision.go:177] copyRemoteCerts
	I1026 08:22:31.963005  315307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:22:31.963081  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:31.984925  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:22:32.092053  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:22:32.111308  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:22:32.128369  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 08:22:32.146001  315307 provision.go:87] duration metric: took 770.932933ms to configureAuth
	I1026 08:22:32.146018  315307 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:22:32.146210  315307 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:22:32.146315  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:32.166978  315307 main.go:141] libmachine: Using SSH client type: native
	I1026 08:22:32.167283  315307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1026 08:22:32.167308  315307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:22:37.538060  315307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:22:37.538074  315307 machine.go:96] duration metric: took 6.681756434s to provisionDockerMachine
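	# Note: the SSH command above writes CRIO_MINIKUBE_OPTIONS into
	# /etc/sysconfig/crio.minikube and restarts crio, which accounts for most of
	# the 6.68s provision time. A sketch for verifying the drop-in landed,
	# assuming the profile is still running:
	#   minikube -p functional-622437 ssh -- cat /etc/sysconfig/crio.minikube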
	I1026 08:22:37.538083  315307 start.go:293] postStartSetup for "functional-622437" (driver="docker")
	I1026 08:22:37.538093  315307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:22:37.538160  315307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:22:37.538197  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:37.555617  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:22:37.658632  315307 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:22:37.662059  315307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:22:37.662078  315307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:22:37.662087  315307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:22:37.662144  315307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:22:37.662224  315307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:22:37.662297  315307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/test/nested/copy/295475/hosts -> hosts in /etc/test/nested/copy/295475
	I1026 08:22:37.662338  315307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/295475
	I1026 08:22:37.669901  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:22:37.687603  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/test/nested/copy/295475/hosts --> /etc/test/nested/copy/295475/hosts (40 bytes)
	I1026 08:22:37.705153  315307 start.go:296] duration metric: took 167.055646ms for postStartSetup
	I1026 08:22:37.705222  315307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:22:37.705261  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:37.722477  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:22:37.823952  315307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:22:37.828757  315307 fix.go:56] duration metric: took 6.998193617s for fixHost
	I1026 08:22:37.828773  315307 start.go:83] releasing machines lock for "functional-622437", held for 6.998232075s
	I1026 08:22:37.828853  315307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-622437
	I1026 08:22:37.845434  315307 ssh_runner.go:195] Run: cat /version.json
	I1026 08:22:37.845477  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:37.845764  315307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:22:37.845819  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:22:37.866706  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:22:37.866761  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:22:38.064922  315307 ssh_runner.go:195] Run: systemctl --version
	I1026 08:22:38.071746  315307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:22:38.111842  315307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:22:38.116404  315307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:22:38.116465  315307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:22:38.124557  315307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:22:38.124582  315307 start.go:495] detecting cgroup driver to use...
	I1026 08:22:38.124615  315307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:22:38.124671  315307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:22:38.140928  315307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:22:38.153848  315307 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:22:38.153902  315307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:22:38.170960  315307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:22:38.184449  315307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:22:38.320219  315307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:22:38.465171  315307 docker.go:234] disabling docker service ...
	I1026 08:22:38.465246  315307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:22:38.480508  315307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:22:38.493842  315307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:22:38.634137  315307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:22:38.776919  315307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:22:38.791340  315307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:22:38.808487  315307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:22:38.808547  315307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.817587  315307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:22:38.817646  315307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.827108  315307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.836791  315307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.846087  315307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:22:38.854645  315307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.864091  315307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.872845  315307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:22:38.882112  315307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:22:38.889940  315307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:22:38.897888  315307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:22:39.033257  315307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:22:47.130282  315307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.097000395s)
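	# Note: the sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause
	# image, cgroupfs cgroup manager, unprivileged-port sysctl) before the crio
	# restart that completes here. A sketch for inspecting the resulting drop-in,
	# assuming the node is still up:
	#   minikube -p functional-622437 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf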
	I1026 08:22:47.130317  315307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:22:47.130369  315307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:22:47.134172  315307 start.go:563] Will wait 60s for crictl version
	I1026 08:22:47.134227  315307 ssh_runner.go:195] Run: which crictl
	I1026 08:22:47.137733  315307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:22:47.169129  315307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:22:47.169198  315307 ssh_runner.go:195] Run: crio --version
	I1026 08:22:47.196325  315307 ssh_runner.go:195] Run: crio --version
	I1026 08:22:47.233971  315307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:22:47.236930  315307 cli_runner.go:164] Run: docker network inspect functional-622437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:22:47.253500  315307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:22:47.260718  315307 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1026 08:22:47.263696  315307 kubeadm.go:883] updating cluster {Name:functional-622437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:22:47.263820  315307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:22:47.263902  315307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:22:47.297673  315307 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:22:47.297692  315307 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:22:47.297752  315307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:22:47.325822  315307 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:22:47.325834  315307 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:22:47.325841  315307 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1026 08:22:47.325946  315307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-622437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:22:47.326028  315307 ssh_runner.go:195] Run: crio config
	I1026 08:22:47.396710  315307 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1026 08:22:47.396730  315307 cni.go:84] Creating CNI manager for ""
	I1026 08:22:47.396739  315307 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:22:47.396752  315307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:22:47.396774  315307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-622437 NodeName:functional-622437 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:22:47.396896  315307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-622437"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:22:47.396961  315307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:22:47.404679  315307 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:22:47.404740  315307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:22:47.412283  315307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 08:22:47.424976  315307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:22:47.437174  315307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
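
The config block above is rendered by minikube and written to /var/tmp/minikube/kubeadm.yaml.new before it is compared with the copy already on the node. As a rough sketch of that rendering step (the template and field names here are illustrative assumptions, not minikube's actual source), a Go text/template fills in the per-profile values seen in the log:

-- sketch (Go) --
// Minimal sketch, assuming a template-based renderer: produce the
// InitConfiguration stanza from the profile values logged above.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

// opts is a hypothetical stand-in for minikube's kubeadm option struct.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log: 192.168.49.2:8441 for this profile.
	if err := t.Execute(os.Stdout, opts{AdvertiseAddress: "192.168.49.2", APIServerPort: 8441}); err != nil {
		panic(err)
	}
}
-- /sketch --

Running this prints the InitConfiguration stanza with the advertise address and bind port used by this profile; the real renderer emits the full multi-document YAML shown above.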
	I1026 08:22:47.449490  315307 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:22:47.453274  315307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:22:47.593753  315307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:22:47.608216  315307 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437 for IP: 192.168.49.2
	I1026 08:22:47.608227  315307 certs.go:195] generating shared ca certs ...
	I1026 08:22:47.608241  315307 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:22:47.608378  315307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:22:47.608412  315307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:22:47.608418  315307 certs.go:257] generating profile certs ...
	I1026 08:22:47.608498  315307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.key
	I1026 08:22:47.608536  315307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/apiserver.key.44f923a4
	I1026 08:22:47.608577  315307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/proxy-client.key
	I1026 08:22:47.608683  315307 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:22:47.608713  315307 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:22:47.608720  315307 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:22:47.608747  315307 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:22:47.608767  315307 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:22:47.608790  315307 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:22:47.608835  315307 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:22:47.609408  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:22:47.629294  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:22:47.647549  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:22:47.665204  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:22:47.682688  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 08:22:47.700498  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:22:47.717990  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:22:47.734628  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:22:47.752309  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:22:47.769600  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:22:47.786863  315307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:22:47.803551  315307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:22:47.816744  315307 ssh_runner.go:195] Run: openssl version
	I1026 08:22:47.822933  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:22:47.831470  315307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:22:47.835071  315307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:22:47.835125  315307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:22:47.876268  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:22:47.883984  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:22:47.892212  315307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:22:47.895840  315307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:22:47.895892  315307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:22:47.936628  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:22:47.944515  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:22:47.953067  315307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:22:47.956803  315307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:22:47.956860  315307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:22:47.998885  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
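
The repeated openssl x509 -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up by the hash of its subject, so each installed PEM gets a <hash>.0 symlink (b5213941.0 for minikubeCA.pem here). A minimal local sketch of that step (paths taken from the log; the real commands run as root over SSH):

-- sketch (Go) --
// Sketch of the hash-and-symlink step: compute the OpenSSL subject hash
// for a PEM and derive the /etc/ssl/certs/<hash>.0 symlink name.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching b5213941.0 above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// The logged command then runs: ln -fs <pem> <link> (requires root).
	fmt.Println("would run: ln -fs", pem, link)
}
-- /sketch --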
	I1026 08:22:48.006772  315307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:22:48.014325  315307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:22:48.058164  315307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:22:48.101634  315307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:22:48.143047  315307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:22:48.185284  315307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:22:48.231541  315307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
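
Each -checkend 86400 call asks whether the certificate expires within the next 86400 seconds (24 hours); a zero exit keeps the existing cert, while a non-zero exit would force regeneration. The same check in pure Go, as a sketch rather than minikube's code:

-- sketch (Go) --
// Equivalent of `openssl x509 -checkend 86400`: report whether a cert
// expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400 fails if NotAfter falls inside the next 24 hours.
	expiresSoon := time.Now().Add(24 * time.Hour).After(cert.NotAfter)
	fmt.Println("expires within 24h:", expiresSoon)
}
-- /sketch --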
	I1026 08:22:48.312109  315307 kubeadm.go:400] StartCluster: {Name:functional-622437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:22:48.312186  315307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:22:48.312270  315307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:22:48.394279  315307 cri.go:89] found id: "6d62c7a59be9bc1fdc1cb044d836bfb04617035498d888511d91b415f0d3e668"
	I1026 08:22:48.394290  315307 cri.go:89] found id: "c1cd86f7fa12090ed9e22d844994d749c7c3df5b61153b5306ec5a7c984b6d2c"
	I1026 08:22:48.394293  315307 cri.go:89] found id: "f75d23b7cb5be506390d7f4feca9a0e3e3879e54a49fbdafb8b943e553e25a41"
	I1026 08:22:48.394296  315307 cri.go:89] found id: "751fff8a9191884d3d02a764ecc3f7236b66fb655f30a8849a694719e27ed4bf"
	I1026 08:22:48.394298  315307 cri.go:89] found id: "10c101b42e114919af105f062b094569f046cc071fb3afbd5891156476779ba3"
	I1026 08:22:48.394301  315307 cri.go:89] found id: "afe01bb18b40f25492b459bbfe334c3b5ee3c9ad508ab2ed04c5d6f102cd584f"
	I1026 08:22:48.394303  315307 cri.go:89] found id: "22cefc932399552bb41a45c52d6f0807d4623235fb64dd56993f12d2c94a6edc"
	I1026 08:22:48.394305  315307 cri.go:89] found id: "0c8462446d3d590825df4083e12ef184d7741e16478f3eb37ea6c870607302b1"
	I1026 08:22:48.394307  315307 cri.go:89] found id: "2f40b18b8c4361fd526d78ba714f8768460f5f70755243014d78edc2c6db6bb6"
	I1026 08:22:48.394312  315307 cri.go:89] found id: "4ab504128977d8f1f42cf0d35014f9ec551b7e92dc3e925c725d1e3713d74d36"
	I1026 08:22:48.394315  315307 cri.go:89] found id: "2c9d2fc8979bccafc42012a2c004bd1f09dc0c92c422fbdba26cd62de1936d58"
	I1026 08:22:48.394318  315307 cri.go:89] found id: "52cd18957e7428c5b6b0cc5a8706d43572e01f330edc92755349aa308cabfcd6"
	I1026 08:22:48.394320  315307 cri.go:89] found id: "943ec97713e994e542e2145d859bd57777ed4d3cb20c51e14ed870100f8a07c4"
	I1026 08:22:48.394323  315307 cri.go:89] found id: "86023b05c0eeea3c6f8f4a8e80a4058366470808af3710f3c497033ccc0a1974"
	I1026 08:22:48.394325  315307 cri.go:89] found id: "8445fd73734067d7b40a21b28109c190d4909e54c329f23852b88a38bb721fee"
	I1026 08:22:48.394328  315307 cri.go:89] found id: ""
	I1026 08:22:48.394391  315307 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:22:48.411693  315307 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:22:48Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:22:48.411774  315307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:22:48.421354  315307 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:22:48.421363  315307 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:22:48.421444  315307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:22:48.431461  315307 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:22:48.432042  315307 kubeconfig.go:125] found "functional-622437" server: "https://192.168.49.2:8441"
	I1026 08:22:48.433936  315307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:22:48.455356  315307 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-26 08:20:50.120995289 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-26 08:22:47.446588525 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
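
Reconfiguration is triggered purely by this diff: the freshly rendered kubeadm.yaml.new is compared against the live kubeadm.yaml, and any difference (diff exiting with status 1) marks the config as drifted. A sketch of that decision, inferred from the log rather than taken from minikube's source:

-- sketch (Go) --
// Sketch of the drift check: `diff -u old new` exits 0 when identical,
// 1 when the files differ, >1 on error.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	if err != nil {
		panic(err) // exit code >1 means diff itself failed
	}
	fmt.Println("config unchanged")
}
-- /sketch --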
	I1026 08:22:48.455366  315307 kubeadm.go:1160] stopping kube-system containers ...
	I1026 08:22:48.455388  315307 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 08:22:48.455450  315307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:22:48.535351  315307 cri.go:89] found id: "41e430335d33dba64cb475dcd9cd0d99a69f708349204a01927789e1755766a2"
	I1026 08:22:48.535362  315307 cri.go:89] found id: "3d4e6352151c5b9d2ff8f2880ff7a515440892a4af86fb98ad79a14a71ba94a4"
	I1026 08:22:48.535376  315307 cri.go:89] found id: "6d62c7a59be9bc1fdc1cb044d836bfb04617035498d888511d91b415f0d3e668"
	I1026 08:22:48.535380  315307 cri.go:89] found id: "c1cd86f7fa12090ed9e22d844994d749c7c3df5b61153b5306ec5a7c984b6d2c"
	I1026 08:22:48.535382  315307 cri.go:89] found id: "f75d23b7cb5be506390d7f4feca9a0e3e3879e54a49fbdafb8b943e553e25a41"
	I1026 08:22:48.535385  315307 cri.go:89] found id: "751fff8a9191884d3d02a764ecc3f7236b66fb655f30a8849a694719e27ed4bf"
	I1026 08:22:48.535387  315307 cri.go:89] found id: "10c101b42e114919af105f062b094569f046cc071fb3afbd5891156476779ba3"
	I1026 08:22:48.535389  315307 cri.go:89] found id: "afe01bb18b40f25492b459bbfe334c3b5ee3c9ad508ab2ed04c5d6f102cd584f"
	I1026 08:22:48.535392  315307 cri.go:89] found id: "22cefc932399552bb41a45c52d6f0807d4623235fb64dd56993f12d2c94a6edc"
	I1026 08:22:48.535410  315307 cri.go:89] found id: "0c8462446d3d590825df4083e12ef184d7741e16478f3eb37ea6c870607302b1"
	I1026 08:22:48.535412  315307 cri.go:89] found id: "2f40b18b8c4361fd526d78ba714f8768460f5f70755243014d78edc2c6db6bb6"
	I1026 08:22:48.535417  315307 cri.go:89] found id: "4ab504128977d8f1f42cf0d35014f9ec551b7e92dc3e925c725d1e3713d74d36"
	I1026 08:22:48.535419  315307 cri.go:89] found id: "2c9d2fc8979bccafc42012a2c004bd1f09dc0c92c422fbdba26cd62de1936d58"
	I1026 08:22:48.535421  315307 cri.go:89] found id: "52cd18957e7428c5b6b0cc5a8706d43572e01f330edc92755349aa308cabfcd6"
	I1026 08:22:48.535423  315307 cri.go:89] found id: "86023b05c0eeea3c6f8f4a8e80a4058366470808af3710f3c497033ccc0a1974"
	I1026 08:22:48.535428  315307 cri.go:89] found id: "8445fd73734067d7b40a21b28109c190d4909e54c329f23852b88a38bb721fee"
	I1026 08:22:48.535430  315307 cri.go:89] found id: ""
	I1026 08:22:48.535435  315307 cri.go:252] Stopping containers: [41e430335d33dba64cb475dcd9cd0d99a69f708349204a01927789e1755766a2 3d4e6352151c5b9d2ff8f2880ff7a515440892a4af86fb98ad79a14a71ba94a4 6d62c7a59be9bc1fdc1cb044d836bfb04617035498d888511d91b415f0d3e668 c1cd86f7fa12090ed9e22d844994d749c7c3df5b61153b5306ec5a7c984b6d2c f75d23b7cb5be506390d7f4feca9a0e3e3879e54a49fbdafb8b943e553e25a41 751fff8a9191884d3d02a764ecc3f7236b66fb655f30a8849a694719e27ed4bf 10c101b42e114919af105f062b094569f046cc071fb3afbd5891156476779ba3 afe01bb18b40f25492b459bbfe334c3b5ee3c9ad508ab2ed04c5d6f102cd584f 22cefc932399552bb41a45c52d6f0807d4623235fb64dd56993f12d2c94a6edc 0c8462446d3d590825df4083e12ef184d7741e16478f3eb37ea6c870607302b1 2f40b18b8c4361fd526d78ba714f8768460f5f70755243014d78edc2c6db6bb6 4ab504128977d8f1f42cf0d35014f9ec551b7e92dc3e925c725d1e3713d74d36 2c9d2fc8979bccafc42012a2c004bd1f09dc0c92c422fbdba26cd62de1936d58 52cd18957e7428c5b6b0cc5a8706d43572e01f330edc92755349aa308cabfcd6 86023b05c0eeea3c6f8f4a8e80a4058366470808af3710f3c497033ccc0a1974 8445fd73734067d7b40a21b28109c190d4909e54c329f23852b88a38bb721fee]
	I1026 08:22:48.535505  315307 ssh_runner.go:195] Run: which crictl
	I1026 08:22:48.545717  315307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 41e430335d33dba64cb475dcd9cd0d99a69f708349204a01927789e1755766a2 3d4e6352151c5b9d2ff8f2880ff7a515440892a4af86fb98ad79a14a71ba94a4 6d62c7a59be9bc1fdc1cb044d836bfb04617035498d888511d91b415f0d3e668 c1cd86f7fa12090ed9e22d844994d749c7c3df5b61153b5306ec5a7c984b6d2c f75d23b7cb5be506390d7f4feca9a0e3e3879e54a49fbdafb8b943e553e25a41 751fff8a9191884d3d02a764ecc3f7236b66fb655f30a8849a694719e27ed4bf 10c101b42e114919af105f062b094569f046cc071fb3afbd5891156476779ba3 afe01bb18b40f25492b459bbfe334c3b5ee3c9ad508ab2ed04c5d6f102cd584f 22cefc932399552bb41a45c52d6f0807d4623235fb64dd56993f12d2c94a6edc 0c8462446d3d590825df4083e12ef184d7741e16478f3eb37ea6c870607302b1 2f40b18b8c4361fd526d78ba714f8768460f5f70755243014d78edc2c6db6bb6 4ab504128977d8f1f42cf0d35014f9ec551b7e92dc3e925c725d1e3713d74d36 2c9d2fc8979bccafc42012a2c004bd1f09dc0c92c422fbdba26cd62de1936d58 52cd18957e7428c5b6b0cc5a8706d43572e01f330edc92755349aa308cabfcd6 86023b05c0eeea3c6f8f4a8e80a4058366470808af3710f3c497033ccc0a1974 8445fd73734067d7b40a21b28109c190d4909e54c329f23852b88a38bb721fee
	I1026 08:22:58.727171  315307 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 41e430335d33dba64cb475dcd9cd0d99a69f708349204a01927789e1755766a2 3d4e6352151c5b9d2ff8f2880ff7a515440892a4af86fb98ad79a14a71ba94a4 6d62c7a59be9bc1fdc1cb044d836bfb04617035498d888511d91b415f0d3e668 c1cd86f7fa12090ed9e22d844994d749c7c3df5b61153b5306ec5a7c984b6d2c f75d23b7cb5be506390d7f4feca9a0e3e3879e54a49fbdafb8b943e553e25a41 751fff8a9191884d3d02a764ecc3f7236b66fb655f30a8849a694719e27ed4bf 10c101b42e114919af105f062b094569f046cc071fb3afbd5891156476779ba3 afe01bb18b40f25492b459bbfe334c3b5ee3c9ad508ab2ed04c5d6f102cd584f 22cefc932399552bb41a45c52d6f0807d4623235fb64dd56993f12d2c94a6edc 0c8462446d3d590825df4083e12ef184d7741e16478f3eb37ea6c870607302b1 2f40b18b8c4361fd526d78ba714f8768460f5f70755243014d78edc2c6db6bb6 4ab504128977d8f1f42cf0d35014f9ec551b7e92dc3e925c725d1e3713d74d36 2c9d2fc8979bccafc42012a2c004bd1f09dc0c92c422fbdba26cd62de1936d58 52cd18957e7428c5b6b0cc5a8706d43572e01f330edc92755349aa308cabfcd6 86023b05c0eeea3c6f8f4a8e80a4058366470808af3710f3c497033ccc0a1974 8445fd73734067d7b40a21b28109c190d4909e54c329f23852b88a38bb721fee: (10.181408092s)
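
The stop took just over 10 seconds, which matches the --timeout=10 grace period elapsing before at least one container had to be killed. A sketch of the same batch stop (the container IDs here are placeholders, not the ones from the log):

-- sketch (Go) --
// Sketch: stop a batch of CRI containers with a 10s grace period, as the
// crictl invocation above does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ids := []string{"<container-id-1>", "<container-id-2>"} // placeholders
	args := append([]string{"/usr/local/bin/crictl", "stop", "--timeout=10"}, ids...)
	start := time.Now()
	out, err := exec.Command("sudo", args...).CombinedOutput()
	// Runtimes get up to 10s per container to stop gracefully before SIGKILL.
	fmt.Printf("crictl stop took %v, err=%v\n%s", time.Since(start), err, out)
}
-- /sketch --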
	I1026 08:22:58.727238  315307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 08:22:58.821780  315307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:22:58.830015  315307 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 26 08:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 26 08:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 26 08:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 26 08:20 /etc/kubernetes/scheduler.conf
	
	I1026 08:22:58.830071  315307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1026 08:22:58.838254  315307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1026 08:22:58.846547  315307 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:22:58.846603  315307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:22:58.854439  315307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1026 08:22:58.862581  315307 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:22:58.862654  315307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:22:58.870930  315307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1026 08:22:58.878894  315307 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:22:58.878947  315307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:22:58.886476  315307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:22:58.894600  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:22:58.946465  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:23:01.541575  315307 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.595086816s)
	I1026 08:23:01.541631  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:23:01.771625  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:23:01.842426  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:23:01.914272  315307 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:23:01.914334  315307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:23:01.939933  315307 api_server.go:72] duration metric: took 25.670957ms to wait for apiserver process to appear ...
	I1026 08:23:01.939947  315307 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:23:01.939977  315307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1026 08:23:01.954126  315307 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1026 08:23:01.978951  315307 api_server.go:141] control plane version: v1.34.1
	I1026 08:23:01.978968  315307 api_server.go:131] duration metric: took 39.015522ms to wait for apiserver health ...
	I1026 08:23:01.978976  315307 cni.go:84] Creating CNI manager for ""
	I1026 08:23:01.978981  315307 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:23:01.985511  315307 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:23:01.988520  315307 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 08:23:01.993379  315307 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:23:01.993390  315307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 08:23:02.024428  315307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:23:02.525437  315307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:23:02.529332  315307 system_pods.go:59] 8 kube-system pods found
	I1026 08:23:02.529358  315307 system_pods.go:61] "coredns-66bc5c9577-crpxn" [d445f1de-0ad7-4740-8d24-a65a814db80d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:23:02.529366  315307 system_pods.go:61] "etcd-functional-622437" [db2b67bd-fabb-4976-8c5f-fffd4a8817b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:23:02.529372  315307 system_pods.go:61] "kindnet-wmh92" [f656e0a9-5e08-465a-a179-4cd771457664] Running
	I1026 08:23:02.529378  315307 system_pods.go:61] "kube-apiserver-functional-622437" [b44cf12c-7d51-45b3-a7e6-47f5dc148dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:23:02.529383  315307 system_pods.go:61] "kube-controller-manager-functional-622437" [87a33c32-440c-41fb-9059-a9750ed24a92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:23:02.529388  315307 system_pods.go:61] "kube-proxy-vsh2h" [8234436e-f974-47aa-8dfd-728b811ee52c] Running
	I1026 08:23:02.529393  315307 system_pods.go:61] "kube-scheduler-functional-622437" [8f26d7f9-2cd3-48fe-a385-c2ec9074c94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:23:02.529397  315307 system_pods.go:61] "storage-provisioner" [3323c4ea-96f8-48d6-a2a9-cf34c69c954f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:23:02.529402  315307 system_pods.go:74] duration metric: took 3.955445ms to wait for pod list to return data ...
	I1026 08:23:02.529409  315307 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:23:02.531862  315307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:23:02.531880  315307 node_conditions.go:123] node cpu capacity is 2
	I1026 08:23:02.531890  315307 node_conditions.go:105] duration metric: took 2.477934ms to run NodePressure ...
	I1026 08:23:02.531951  315307 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:23:03.295869  315307 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1026 08:23:03.304610  315307 kubeadm.go:743] kubelet initialised
	I1026 08:23:03.304621  315307 kubeadm.go:744] duration metric: took 8.739126ms waiting for restarted kubelet to initialise ...
	I1026 08:23:03.304635  315307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W1026 08:23:03.316214  315307 kubeadm.go:748] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc/3794: Is a directory
	cat: 4227/oom_adj: No such file or directory
	I1026 08:23:03.316226  315307 kubeadm.go:601] duration metric: took 14.89485879s to restartPrimaryControlPlane
	I1026 08:23:03.316236  315307 kubeadm.go:402] duration metric: took 15.004139257s to StartCluster
	I1026 08:23:03.316251  315307 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:23:03.316308  315307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:23:03.316988  315307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:23:03.317186  315307 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:23:03.317451  315307 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:23:03.317483  315307 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:23:03.317539  315307 addons.go:69] Setting storage-provisioner=true in profile "functional-622437"
	I1026 08:23:03.317553  315307 addons.go:238] Setting addon storage-provisioner=true in "functional-622437"
	W1026 08:23:03.317558  315307 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:23:03.317576  315307 host.go:66] Checking if "functional-622437" exists ...
	I1026 08:23:03.318004  315307 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
	I1026 08:23:03.318431  315307 addons.go:69] Setting default-storageclass=true in profile "functional-622437"
	I1026 08:23:03.318452  315307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-622437"
	I1026 08:23:03.318764  315307 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
	I1026 08:23:03.328992  315307 out.go:179] * Verifying Kubernetes components...
	I1026 08:23:03.332031  315307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:23:03.372070  315307 addons.go:238] Setting addon default-storageclass=true in "functional-622437"
	W1026 08:23:03.372081  315307 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:23:03.372102  315307 host.go:66] Checking if "functional-622437" exists ...
	I1026 08:23:03.372505  315307 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
	I1026 08:23:03.380212  315307 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:23:03.384897  315307 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:23:03.384909  315307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:23:03.384978  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:23:03.413263  315307 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:23:03.413282  315307 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:23:03.413345  315307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:23:03.424779  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:23:03.453623  315307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:23:03.632809  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:23:03.670611  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:23:03.693654  315307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1026 08:23:04.235038  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:persistent-volume-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:persistent-volume-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp [::1]:8441: connect: connection refused
	I1026 08:23:04.235075  315307 retry.go:31] will retry after 325.578076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:persistent-volume-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:persistent-volume-provisioner": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp [::1]:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp [::1]:8441: connect: connection refused
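
While the apiserver is still down, every kubectl apply fails with connection refused and retry.go schedules another attempt; the delays in this log (325ms, 376ms, 444ms, 1.03s, 1.4s, 2.74s, 4.07s, ...) grow roughly geometrically with jitter. A sketch of that style of backoff loop, not minikube's actual implementation:

-- sketch (Go) --
// Sketch of an exponential-backoff-with-jitter retry loop of the kind the
// retry.go lines above suggest.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, f func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := f(); err == nil {
			return nil
		}
		// Add up to 50% jitter, then double the base delay for next time.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Println("will retry after", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		return errors.New("connection refused") // stand-in for the kubectl apply
	})
}
-- /sketch --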
	I1026 08:23:04.235132  315307 node_ready.go:35] waiting up to 6m0s for node "functional-622437" to be "Ready" ...
	W1026 08:23:04.235479  315307 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1026 08:23:04.560855  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:04.698591  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:04.698613  315307 retry.go:31] will retry after 376.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:05.076341  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:05.206723  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:05.206747  315307 retry.go:31] will retry after 444.330964ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:05.652074  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:05.771901  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:05.771921  315307 retry.go:31] will retry after 1.035763882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:23:06.235622  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 08:23:06.808415  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:06.924788  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:06.924810  315307 retry.go:31] will retry after 1.402247768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:08.327970  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:08.388414  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:08.388436  315307 retry.go:31] will retry after 2.738092787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:23:08.736306  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 08:23:11.127342  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:11.198975  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:11.198998  315307 retry.go:31] will retry after 4.071063766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:23:11.236665  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	W1026 08:23:13.735776  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 08:23:15.271033  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:15.336470  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:15.336490  315307 retry.go:31] will retry after 3.897719363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:23:15.736455  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	W1026 08:23:18.235777  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 08:23:19.234767  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 08:23:19.298852  315307 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 08:23:19.298873  315307 retry.go:31] will retry after 5.282728804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 08:23:20.735736  315307 node_ready.go:55] error getting node "functional-622437" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-622437": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 08:23:24.582511  315307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:23:24.753167  315307 node_ready.go:49] node "functional-622437" is "Ready"
	I1026 08:23:24.753184  315307 node_ready.go:38] duration metric: took 20.518040051s for node "functional-622437" to be "Ready" ...
	I1026 08:23:24.753196  315307 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:23:24.753253  315307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:23:25.466603  315307 api_server.go:72] duration metric: took 22.149392137s to wait for apiserver process to appear ...
	I1026 08:23:25.466614  315307 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:23:25.466629  315307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1026 08:23:25.469616  315307 out.go:179] * Enabled addons: storage-provisioner
	I1026 08:23:25.472604  315307 addons.go:514] duration metric: took 22.155097752s for enable addons: enabled=[storage-provisioner]
	I1026 08:23:25.475422  315307 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:23:25.475437  315307 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
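
The two 500 dumps above are the same response body logged twice (once with the response, once as a warning). A 500 from /healthz whose only [-] entries are poststarthook lines means the apiserver is accepting connections but its bootstrap hooks (RBAC roles, system priority classes) have not finished; the next poll below succeeds half a second later. A rough Go sketch of such a polling loop; the TLS-verification shortcut is for brevity, a real client would trust the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls /healthz until it returns 200 "ok". A 500 with
    // pending poststarthooks is treated as "not yet", so we just keep polling.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute))
    }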
	I1026 08:23:25.966974  315307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1026 08:23:25.976414  315307 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1026 08:23:25.977438  315307 api_server.go:141] control plane version: v1.34.1
	I1026 08:23:25.977450  315307 api_server.go:131] duration metric: took 510.831468ms to wait for apiserver health ...
	I1026 08:23:25.977457  315307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:23:25.980840  315307 system_pods.go:59] 8 kube-system pods found
	I1026 08:23:25.980854  315307 system_pods.go:61] "coredns-66bc5c9577-crpxn" [d445f1de-0ad7-4740-8d24-a65a814db80d] Running
	I1026 08:23:25.980858  315307 system_pods.go:61] "etcd-functional-622437" [db2b67bd-fabb-4976-8c5f-fffd4a8817b6] Running
	I1026 08:23:25.980861  315307 system_pods.go:61] "kindnet-wmh92" [f656e0a9-5e08-465a-a179-4cd771457664] Running
	I1026 08:23:25.980868  315307 system_pods.go:61] "kube-apiserver-functional-622437" [4086d149-9be6-464a-86f0-5b641b03dd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:23:25.980873  315307 system_pods.go:61] "kube-controller-manager-functional-622437" [87a33c32-440c-41fb-9059-a9750ed24a92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:23:25.980879  315307 system_pods.go:61] "kube-proxy-vsh2h" [8234436e-f974-47aa-8dfd-728b811ee52c] Running
	I1026 08:23:25.980884  315307 system_pods.go:61] "kube-scheduler-functional-622437" [8f26d7f9-2cd3-48fe-a385-c2ec9074c94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:23:25.980890  315307 system_pods.go:61] "storage-provisioner" [3323c4ea-96f8-48d6-a2a9-cf34c69c954f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:23:25.980894  315307 system_pods.go:74] duration metric: took 3.432393ms to wait for pod list to return data ...
	I1026 08:23:25.980901  315307 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:23:25.983726  315307 default_sa.go:45] found service account: "default"
	I1026 08:23:25.983739  315307 default_sa.go:55] duration metric: took 2.83286ms for default service account to be created ...
	I1026 08:23:25.983747  315307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:23:25.986581  315307 system_pods.go:86] 8 kube-system pods found
	I1026 08:23:25.986595  315307 system_pods.go:89] "coredns-66bc5c9577-crpxn" [d445f1de-0ad7-4740-8d24-a65a814db80d] Running
	I1026 08:23:25.986600  315307 system_pods.go:89] "etcd-functional-622437" [db2b67bd-fabb-4976-8c5f-fffd4a8817b6] Running
	I1026 08:23:25.986603  315307 system_pods.go:89] "kindnet-wmh92" [f656e0a9-5e08-465a-a179-4cd771457664] Running
	I1026 08:23:25.986610  315307 system_pods.go:89] "kube-apiserver-functional-622437" [4086d149-9be6-464a-86f0-5b641b03dd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:23:25.986617  315307 system_pods.go:89] "kube-controller-manager-functional-622437" [87a33c32-440c-41fb-9059-a9750ed24a92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:23:25.986621  315307 system_pods.go:89] "kube-proxy-vsh2h" [8234436e-f974-47aa-8dfd-728b811ee52c] Running
	I1026 08:23:25.986626  315307 system_pods.go:89] "kube-scheduler-functional-622437" [8f26d7f9-2cd3-48fe-a385-c2ec9074c94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:23:25.986632  315307 system_pods.go:89] "storage-provisioner" [3323c4ea-96f8-48d6-a2a9-cf34c69c954f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:23:25.986638  315307 system_pods.go:126] duration metric: took 2.887549ms to wait for k8s-apps to be running ...
	I1026 08:23:25.986645  315307 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:23:25.986702  315307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:23:25.999875  315307 system_svc.go:56] duration metric: took 13.219396ms WaitForService to wait for kubelet
	I1026 08:23:25.999892  315307 kubeadm.go:586] duration metric: took 22.682687004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:23:25.999910  315307 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:23:26.003087  315307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:23:26.003102  315307 node_conditions.go:123] node cpu capacity is 2
	I1026 08:23:26.003111  315307 node_conditions.go:105] duration metric: took 3.196591ms to run NodePressure ...
	I1026 08:23:26.003122  315307 start.go:241] waiting for startup goroutines ...
	I1026 08:23:26.003129  315307 start.go:246] waiting for cluster config update ...
	I1026 08:23:26.003140  315307 start.go:255] writing updated cluster config ...
	I1026 08:23:26.003430  315307 ssh_runner.go:195] Run: rm -f paused
	I1026 08:23:26.008400  315307 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:23:26.013177  315307 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-crpxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:26.018882  315307 pod_ready.go:94] pod "coredns-66bc5c9577-crpxn" is "Ready"
	I1026 08:23:26.018896  315307 pod_ready.go:86] duration metric: took 5.705566ms for pod "coredns-66bc5c9577-crpxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:26.021677  315307 pod_ready.go:83] waiting for pod "etcd-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:26.027155  315307 pod_ready.go:94] pod "etcd-functional-622437" is "Ready"
	I1026 08:23:26.027170  315307 pod_ready.go:86] duration metric: took 5.478215ms for pod "etcd-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:26.030087  315307 pod_ready.go:83] waiting for pod "kube-apiserver-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:23:28.036462  315307 pod_ready.go:104] pod "kube-apiserver-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:30.040485  315307 pod_ready.go:104] pod "kube-apiserver-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:32.535880  315307 pod_ready.go:104] pod "kube-apiserver-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:35.036444  315307 pod_ready.go:104] pod "kube-apiserver-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:37.038341  315307 pod_ready.go:104] pod "kube-apiserver-functional-622437" is not "Ready", error: <nil>
	I1026 08:23:39.036273  315307 pod_ready.go:94] pod "kube-apiserver-functional-622437" is "Ready"
	I1026 08:23:39.036288  315307 pod_ready.go:86] duration metric: took 13.006186784s for pod "kube-apiserver-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:39.039189  315307 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:23:41.044514  315307 pod_ready.go:104] pod "kube-controller-manager-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:43.045133  315307 pod_ready.go:104] pod "kube-controller-manager-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:45.047402  315307 pod_ready.go:104] pod "kube-controller-manager-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:47.544882  315307 pod_ready.go:104] pod "kube-controller-manager-functional-622437" is not "Ready", error: <nil>
	W1026 08:23:49.545033  315307 pod_ready.go:104] pod "kube-controller-manager-functional-622437" is not "Ready", error: <nil>
	I1026 08:23:50.047620  315307 pod_ready.go:94] pod "kube-controller-manager-functional-622437" is "Ready"
	I1026 08:23:50.047636  315307 pod_ready.go:86] duration metric: took 11.008435338s for pod "kube-controller-manager-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:50.050382  315307 pod_ready.go:83] waiting for pod "kube-proxy-vsh2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:50.055487  315307 pod_ready.go:94] pod "kube-proxy-vsh2h" is "Ready"
	I1026 08:23:50.055502  315307 pod_ready.go:86] duration metric: took 5.107035ms for pod "kube-proxy-vsh2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:50.058038  315307 pod_ready.go:83] waiting for pod "kube-scheduler-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:50.063580  315307 pod_ready.go:94] pod "kube-scheduler-functional-622437" is "Ready"
	I1026 08:23:50.063605  315307 pod_ready.go:86] duration metric: took 5.554024ms for pod "kube-scheduler-functional-622437" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:23:50.063616  315307 pod_ready.go:40] duration metric: took 24.055109558s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:23:50.123530  315307 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 08:23:50.126491  315307 out.go:179] * Done! kubectl is now configured to use "functional-622437" cluster and "default" namespace by default
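
The "extra waiting" phase that closes the start log above resolves each labelled kube-system pod and checks its Ready condition. A minimal client-go sketch of that per-pod check, assuming kubeconfig access to this cluster (the pod name is copied from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True; pods whose
    // containers are restarting show Running but Ready=False, which is what
    // produces the `is not "Ready"` lines in the log.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-apiserver-functional-622437", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }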
	
	
	==> CRI-O <==
	Oct 26 08:24:19 functional-622437 crio[3489]: time="2025-10-26T08:24:19.246772953Z" level=info msg="Created container 1fdcd888e39349940a64bd175371c5622a0793895223b27fabddfb441a7b9fc4: default/sp-pod/myfrontend" id=c7851a2c-1947-4a0a-b872-6bccee5f4cd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:24:19 functional-622437 crio[3489]: time="2025-10-26T08:24:19.247554962Z" level=info msg="Starting container: 1fdcd888e39349940a64bd175371c5622a0793895223b27fabddfb441a7b9fc4" id=a20903cf-baf8-48f1-aac3-8dcc444bd376 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:24:19 functional-622437 crio[3489]: time="2025-10-26T08:24:19.249199712Z" level=info msg="Started container" PID=5520 containerID=1fdcd888e39349940a64bd175371c5622a0793895223b27fabddfb441a7b9fc4 description=default/sp-pod/myfrontend id=a20903cf-baf8-48f1-aac3-8dcc444bd376 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08c2b724942c631b164ce83c54f99b21e06916ab5824e6a1b646b06aa0404c55
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.114195231Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-r6lmg/POD" id=1becb0df-4ade-44a9-8b6a-4f99ea2e11fa name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.114286629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.130109546Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-r6lmg Namespace:default ID:e8ab1d6d76d94b0a7350511146756d5f706de83fff577d7686765e1797e8ce61 UID:f7885d94-b882-4c56-8ff9-4c80a2425de3 NetNS:/var/run/netns/a4017794-f9ff-4bea-94a3-cc1d201b0d2e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400109d928}] Aliases:map[]}"
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.130155553Z" level=info msg="Adding pod default_hello-node-75c85bcc94-r6lmg to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.141618676Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-r6lmg Namespace:default ID:e8ab1d6d76d94b0a7350511146756d5f706de83fff577d7686765e1797e8ce61 UID:f7885d94-b882-4c56-8ff9-4c80a2425de3 NetNS:/var/run/netns/a4017794-f9ff-4bea-94a3-cc1d201b0d2e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400109d928}] Aliases:map[]}"
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.142066863Z" level=info msg="Checking pod default_hello-node-75c85bcc94-r6lmg for CNI network kindnet (type=ptp)"
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.146261222Z" level=info msg="Ran pod sandbox e8ab1d6d76d94b0a7350511146756d5f706de83fff577d7686765e1797e8ce61 with infra container: default/hello-node-75c85bcc94-r6lmg/POD" id=1becb0df-4ade-44a9-8b6a-4f99ea2e11fa name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.148847803Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=13e3144d-404c-47aa-be78-b1d362ef37c2 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:24:26 functional-622437 crio[3489]: time="2025-10-26T08:24:26.949999674Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b0d8b08a-fe9a-4a53-8499-6d54df2d3876 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:24:38 functional-622437 crio[3489]: time="2025-10-26T08:24:38.949968852Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b682abde-5df7-4def-adb5-34b49c711b74 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:24:53 functional-622437 crio[3489]: time="2025-10-26T08:24:53.950228415Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1a8dd9f4-83ac-49b4-a674-43fc405bceff name=/runtime.v1.ImageService/PullImage
	Oct 26 08:25:02 functional-622437 crio[3489]: time="2025-10-26T08:25:02.071126758Z" level=info msg="Stopping pod sandbox: f1d3df803753af90fc5a637ebf0da0fd2d46a1c1d03ac367b1ad59bc2fb09d44" id=01eca79b-413e-4ede-819c-632c5b85c201 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 08:25:02 functional-622437 crio[3489]: time="2025-10-26T08:25:02.071196855Z" level=info msg="Stopped pod sandbox (already stopped): f1d3df803753af90fc5a637ebf0da0fd2d46a1c1d03ac367b1ad59bc2fb09d44" id=01eca79b-413e-4ede-819c-632c5b85c201 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 08:25:02 functional-622437 crio[3489]: time="2025-10-26T08:25:02.071648981Z" level=info msg="Removing pod sandbox: f1d3df803753af90fc5a637ebf0da0fd2d46a1c1d03ac367b1ad59bc2fb09d44" id=8279f28d-3a69-404f-88e6-61d0b0614665 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 08:25:02 functional-622437 crio[3489]: time="2025-10-26T08:25:02.075564444Z" level=info msg="Removed pod sandbox: f1d3df803753af90fc5a637ebf0da0fd2d46a1c1d03ac367b1ad59bc2fb09d44" id=8279f28d-3a69-404f-88e6-61d0b0614665 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 08:25:04 functional-622437 crio[3489]: time="2025-10-26T08:25:04.949704968Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fa9c099e-ee1b-45a6-a641-29159e3b85e5 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:25:42 functional-622437 crio[3489]: time="2025-10-26T08:25:42.950227145Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=da494708-6a37-43e3-81b9-a3f670cbbb5b name=/runtime.v1.ImageService/PullImage
	Oct 26 08:25:53 functional-622437 crio[3489]: time="2025-10-26T08:25:53.949571513Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e5b218c2-f9f3-45e2-9acc-3bd77c6f2ca9 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:27:04 functional-622437 crio[3489]: time="2025-10-26T08:27:04.949864893Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0ea8e184-8051-470d-93b8-44be3ed6d496 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:27:16 functional-622437 crio[3489]: time="2025-10-26T08:27:16.949278359Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=abc3dd45-7dc0-4e6f-b1db-d339e5cc6b35 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:29:53 functional-622437 crio[3489]: time="2025-10-26T08:29:53.949244604Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d25d2e79-ef28-40da-adf6-816eeb36c4f5 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:30:07 functional-622437 crio[3489]: time="2025-10-26T08:30:07.950460266Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3046a010-f530-4a61-a5cd-88d3b0053bcf name=/runtime.v1.ImageService/PullImage
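
The "Pulling image: kicbase/echo-server:latest" lines above recur with widening gaps, consistent with retry backoff after failed pulls (two deployments pull the same tag here, so two schedules interleave, which is why the intervals are not cleanly doubling). A toy schedule for illustration; the initial delay and cap are assumptions, not CRI-O or kubelet defaults:

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoffSchedule produces the first n delays of a capped exponential
    // backoff, the pattern suggested by the pull timestamps above.
    func backoffSchedule(initial, maxDelay time.Duration, n int) []time.Duration {
    	out := make([]time.Duration, 0, n)
    	d := initial
    	for i := 0; i < n; i++ {
    		out = append(out, d)
    		d *= 2
    		if d > maxDelay {
    			d = maxDelay
    		}
    	}
    	return out
    }

    func main() {
    	// e.g. [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
    	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 7))
    }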
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1fdcd888e3934       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f   9 minutes ago       Running             myfrontend                0                   08c2b724942c6       sp-pod                                      default
	0eeead6af8b5c       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   9be7f769f4636       nginx-svc                                   default
	792222566dff7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       5                   b83a2085ace64       storage-provisioner                         kube-system
	57e771500e2d4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   4                   c48ba5e70d3bd       kube-controller-manager-functional-622437   kube-system
	e7222af6e5686       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            2                   cdca56babe94f       kube-apiserver-functional-622437            kube-system
	56032d686f162       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       4                   b83a2085ace64       storage-provisioner                         kube-system
	df2989f60525d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Running             coredns                   3                   58384a101bccf       coredns-66bc5c9577-crpxn                    kube-system
	22db50e54e5c7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  11 minutes ago      Exited              kube-apiserver            1                   cdca56babe94f       kube-apiserver-functional-622437            kube-system
	f8d6ebf62d31f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   3                   c48ba5e70d3bd       kube-controller-manager-functional-622437   kube-system
	6636f2caf6035       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Running             kube-scheduler            3                   7578b0f7e5013       kube-scheduler-functional-622437            kube-system
	29dfa5a10e047       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Running             kindnet-cni               2                   1d7abaf6f7cc7       kindnet-wmh92                               kube-system
	ef6c571a56c3d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Running             etcd                      2                   0e22c36b9b3a0       etcd-functional-622437                      kube-system
	deff2a7e1c567       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Running             kube-proxy                2                   ea9f0a88775fe       kube-proxy-vsh2h                            kube-system
	41e430335d33d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   7578b0f7e5013       kube-scheduler-functional-622437            kube-system
	6d62c7a59be9b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   58384a101bccf       coredns-66bc5c9577-crpxn                    kube-system
	afe01bb18b40f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  12 minutes ago      Exited              kube-proxy                1                   ea9f0a88775fe       kube-proxy-vsh2h                            kube-system
	22cefc9323995       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  12 minutes ago      Exited              kindnet-cni               1                   1d7abaf6f7cc7       kindnet-wmh92                               kube-system
	0c8462446d3d5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  12 minutes ago      Exited              etcd                      1                   0e22c36b9b3a0       etcd-functional-622437                      kube-system
	
	
	==> coredns [6d62c7a59be9bc1fdc1cb044d836bfb04617035498d888511d91b415f0d3e668] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34053 - 23839 "HINFO IN 6602068590832188322.8309532163744663952. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007398692s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
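
Reading this block: the kubernetes plugin waits for the API, hits RBAC "forbidden" errors while the apiserver's rbac/bootstrap-roles hook (the [-] entry in the healthz dump earlier) is still pending, then starts anyway with an unsynced API before being terminated. A sketch of that wait-with-grace-period shape; the grace value here is an assumption, not CoreDNS's actual setting:

    package main

    import (
    	"log"
    	"time"
    )

    // waitThenServe blocks until the API reports synced or the grace period
    // expires, then serves either way, matching the WARNING in the log.
    func waitThenServe(apiSynced func() bool, grace time.Duration, serve func()) {
    	deadline := time.Now().Add(grace)
    	for !apiSynced() {
    		if time.Now().After(deadline) {
    			log.Println("[WARNING] starting server with unsynced Kubernetes API")
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	serve()
    }

    func main() {
    	waitThenServe(func() bool { return false }, 3*time.Second,
    		func() { log.Println("serving on .:53") })
    }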
	
	
	==> coredns [df2989f60525d4978e6f45aa7e2ac3b28959ccf1d79fa7fdafaa883f63fbe466] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50737 - 55039 "HINFO IN 1597201754285410568.470152780781814194. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029946103s
	
	
	==> describe nodes <==
	Name:               functional-622437
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-622437
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=functional-622437
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_21_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-622437
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:31:34 +0000   Sun, 26 Oct 2025 08:20:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:31:34 +0000   Sun, 26 Oct 2025 08:20:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:31:34 +0000   Sun, 26 Oct 2025 08:20:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:31:34 +0000   Sun, 26 Oct 2025 08:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-622437
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0113a200-0e3e-4547-99e1-2c8ef30ee28a
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-r6lmg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-cq7c7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-crpxn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-622437                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-wmh92                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-622437             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-622437    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vsh2h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-622437             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-622437 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-622437 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-622437 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-622437 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-622437 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-622437 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           13m                node-controller  Node functional-622437 event: Registered Node functional-622437 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-622437 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-622437 event: Registered Node functional-622437 in Controller
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node functional-622437 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node functional-622437 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node functional-622437 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-622437 event: Registered Node functional-622437 in Controller
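
Sanity check on the "Allocated resources" percentages above: they are requests over allocatable capacity (2000m CPU, 8022300Ki memory), truncated to whole percent:

    package main

    import "fmt"

    // Reproduces the percentages shown in the node description via integer
    // (truncating) division against allocatable capacity.
    func main() {
    	fmt.Printf("cpu:    %d%%\n", 850*100/2000)         // 850m of 2000m  -> 42%
    	fmt.Printf("memory: %d%%\n", 220*1024*100/8022300) // 220Mi of 8022300Ki -> 2%
    }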
	
	
	==> dmesg <==
	[Oct26 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014214] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501900] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033459] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.999923] kauditd_printk_skb: 36 callbacks suppressed
	[Oct26 08:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 08:14] overlayfs: idmapped layers are currently not supported
	[  +0.063904] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct26 08:20] overlayfs: idmapped layers are currently not supported
	[ +54.744422] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c8462446d3d590825df4083e12ef184d7741e16478f3eb37ea6c870607302b1] <==
	{"level":"warn","ts":"2025-10-26T08:22:08.674918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:22:08.688222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:22:08.706459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:22:08.731447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:22:08.753550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:22:08.766299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:22:08.889006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39250","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:22:32.331581Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T08:22:32.331637Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-622437","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T08:22:32.331738Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T08:22:32.481486Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T08:22:32.481688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T08:22:32.481753Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-26T08:22:32.481765Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T08:22:32.481817Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T08:22:32.481827Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T08:22:32.481840Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T08:22:32.481864Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T08:22:32.481957Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T08:22:32.481996Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T08:22:32.482031Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T08:22:32.485861Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T08:22:32.485952Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T08:22:32.485996Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T08:22:32.486023Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-622437","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ef6c571a56c3d472274b0f4bc42829456c35a76938b03bcfb333c16086b33cf9] <==
	{"level":"warn","ts":"2025-10-26T08:23:23.586472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.606996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.622918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.633551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.647713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.662852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.677950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.693314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.708222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.723460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.738481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.763628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.767944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.784041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.800941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.813692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.829445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.842872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.878972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.887900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.902097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:23:23.984056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:33:22.936366Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1240}
	{"level":"info","ts":"2025-10-26T08:33:22.959462Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1240,"took":"22.706458ms","hash":3676011408,"current-db-size-bytes":3522560,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-26T08:33:22.959514Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3676011408,"revision":1240,"compact-revision":-1}
	
	
	==> kernel <==
	 08:34:10 up  2:16,  0 user,  load average: 0.22, 0.37, 1.47
	Linux functional-622437 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22cefc932399552bb41a45c52d6f0807d4623235fb64dd56993f12d2c94a6edc] <==
	I1026 08:22:05.481460       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:22:05.485165       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 08:22:05.485307       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:22:05.485319       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:22:05.485332       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:22:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:22:05.788019       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:22:05.788044       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:22:05.788055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:22:05.788165       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:22:09.990880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:22:09.990918       1 metrics.go:72] Registering metrics
	I1026 08:22:09.990975       1 controller.go:711] "Syncing nftables rules"
	I1026 08:22:15.786331       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:22:15.786392       1 main.go:301] handling current node
	I1026 08:22:25.781583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:22:25.781640       1 main.go:301] handling current node
	
	
	==> kindnet [29dfa5a10e0479f030172421be3b8d193aab6ce8b0f66deb51eed86048d48efa] <==
	I1026 08:32:08.938833       1 main.go:301] handling current node
	I1026 08:32:18.931207       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:32:18.931243       1 main.go:301] handling current node
	I1026 08:32:28.933940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:32:28.933978       1 main.go:301] handling current node
	I1026 08:32:38.930847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:32:38.930883       1 main.go:301] handling current node
	I1026 08:32:48.931056       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:32:48.931167       1 main.go:301] handling current node
	I1026 08:32:58.930838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:32:58.930873       1 main.go:301] handling current node
	I1026 08:33:08.931663       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:33:08.931719       1 main.go:301] handling current node
	I1026 08:33:18.930852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:33:18.930886       1 main.go:301] handling current node
	I1026 08:33:28.931530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:33:28.931583       1 main.go:301] handling current node
	I1026 08:33:38.930998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:33:38.931034       1 main.go:301] handling current node
	I1026 08:33:48.930950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:33:48.930981       1 main.go:301] handling current node
	I1026 08:33:58.931270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:33:58.931312       1 main.go:301] handling current node
	I1026 08:34:08.939196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:34:08.939308       1 main.go:301] handling current node
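
The kindnet entries above tick every ten seconds, suggesting a ticker-driven reconcile over the node list; the interval is read off the timestamps, not taken from kindnet's source. A generic sketch of the loop shape:

    package main

    import (
    	"context"
    	"log"
    	"time"
    )

    // reconcileLoop runs sync on a fixed cadence until the context is done,
    // the pattern implied by the evenly spaced "handling current node" lines.
    func reconcileLoop(ctx context.Context, sync func() error) {
    	t := time.NewTicker(10 * time.Second)
    	defer t.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return
    		case <-t.C:
    			if err := sync(); err != nil {
    				log.Printf("sync failed: %v", err)
    			}
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	reconcileLoop(ctx, func() error { log.Println("handling current node"); return nil })
    }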
	
	
	==> kube-apiserver [22db50e54e5c77355e07ad2dfc37129895b40c0249fe2ffd5b91b069b19680e7] <==
	I1026 08:23:03.571105       1 options.go:263] external host was not specified, using 192.168.49.2
	I1026 08:23:03.576221       1 server.go:150] Version: v1.34.1
	I1026 08:23:03.576351       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1026 08:23:03.576732       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
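
This exited apiserver (ATTEMPT 1 in the container table above) died because the previous apiserver process still held 0.0.0.0:8441 during the restart. A quick Go check for that condition:

    package main

    import (
    	"fmt"
    	"net"
    )

    // portFree reports whether the port can be bound; while the old process
    // lingers, Listen fails with the same "address already in use" error.
    func portFree(port int) bool {
    	l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
    	if err != nil {
    		return false
    	}
    	l.Close()
    	return true
    }

    func main() {
    	fmt.Println("8441 free:", portFree(8441))
    }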
	
	
	==> kube-apiserver [e7222af6e56868eb313a0097e56473534937d7da3edfb0856a4744466e74b493] <==
	I1026 08:23:24.816589       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 08:23:24.818441       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:23:24.819629       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:23:24.819649       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:23:24.824914       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:23:24.824979       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 08:23:24.839171       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:23:24.839240       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:23:24.876672       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:23:24.899006       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:23:24.909816       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:23:25.521860       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1026 08:23:25.833379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 08:23:25.834905       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:23:25.840441       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:23:39.282556       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:23:40.718149       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:23:53.556515       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.229.113"}
	I1026 08:23:59.641046       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.237.220"}
	I1026 08:24:08.234831       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:24:08.384680       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.59.54"}
	E1026 08:24:18.379187       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1026 08:24:25.672855       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55050: use of closed network connection
	I1026 08:24:25.896012       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.219.27"}
	I1026 08:33:24.800652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [57e771500e2d489efb89761e4d4edb487e62c992c6fc311b9a10effde3f177bd] <==
	I1026 08:23:40.622879       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 08:23:40.633287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:23:40.635565       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 08:23:40.638803       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:23:40.639026       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:23:40.639122       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-622437"
	I1026 08:23:40.639170       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:23:40.640982       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:23:40.641082       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:23:40.643575       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:23:40.646997       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:23:40.655342       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:23:40.659141       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 08:23:40.659192       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:23:40.659604       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:23:40.659677       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:23:40.660299       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:23:40.660399       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:23:40.672727       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:23:40.672740       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:23:40.676811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:23:40.676837       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:23:40.676846       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:23:40.679101       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 08:23:40.689104       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-controller-manager [f8d6ebf62d31f4cca90fe3f12948f33781cec295675e0816f30784922e1020bd] <==
	I1026 08:23:06.152311       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:23:07.530227       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1026 08:23:07.530260       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:23:07.532663       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1026 08:23:07.533284       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 08:23:07.533415       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:23:07.533500       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1026 08:23:17.540475       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [afe01bb18b40f25492b459bbfe334c3b5ee3c9ad508ab2ed04c5d6f102cd584f] <==
	I1026 08:22:07.245670       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:22:08.375895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:22:10.077141       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:22:10.077280       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 08:22:10.077394       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:22:10.216477       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:22:10.216600       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:22:10.281362       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:22:10.281785       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:22:10.282035       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:22:10.287319       1 config.go:200] "Starting service config controller"
	I1026 08:22:10.302827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:22:10.302894       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:22:10.302901       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:22:10.302935       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:22:10.302946       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:22:10.303883       1 config.go:309] "Starting node config controller"
	I1026 08:22:10.303900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:22:10.303918       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:22:10.403985       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:22:10.404199       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:22:10.404259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [deff2a7e1c56712872059b33eafb6c7a8d7b294210967ca927e9c84973f1a228] <==
	I1026 08:22:51.245502       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:22:51.887723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1026 08:22:52.407068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-622437\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1026 08:22:54.075764       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:22:54.075809       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 08:22:54.075878       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:22:54.097476       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:22:54.097545       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:22:54.102057       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:22:54.102416       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:22:54.102437       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:22:54.105758       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:22:54.105841       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:22:54.106163       1 config.go:200] "Starting service config controller"
	I1026 08:22:54.106209       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:22:54.106572       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:22:54.112132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:22:54.112218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:22:54.107023       1 config.go:309] "Starting node config controller"
	I1026 08:22:54.112273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:22:54.112300       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:22:54.206669       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:22:54.206692       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [41e430335d33dba64cb475dcd9cd0d99a69f708349204a01927789e1755766a2] <==
	I1026 08:22:51.712985       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:22:53.430487       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:22:53.434126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1026 08:22:53.434272       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1026 08:22:53.441907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1026 08:22:53.442061       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1026 08:22:53.442204       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 08:22:53.442258       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	E1026 08:22:53.442300       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="RequestHeaderAuthRequestController"
	I1026 08:22:53.442372       1 requestheader_controller.go:187] Shutting down RequestHeaderAuthRequestController
	I1026 08:22:53.442416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:22:53.442467       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 08:22:53.442597       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 08:22:53.442651       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 08:22:53.442678       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 08:22:53.442783       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6636f2caf60359c86ad8c57dce07ed344d1182b8ef268171e2e73793767873f7] <==
	E1026 08:23:13.600289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:23:13.783407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:23:13.937629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:23:14.579800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:23:14.635526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:23:15.298347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:23:15.429573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:23:15.549595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:23:15.916854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:23:15.920542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:23:16.007583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:23:16.288661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:23:16.551009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:23:16.574018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:23:16.629027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:23:17.387548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:23:17.514563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:23:17.523193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:23:17.564952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:23:21.022121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:23:24.674035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:23:24.674215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:23:24.674311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:23:24.674408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1026 08:23:25.076301       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:31:28 functional-622437 kubelet[4126]: E1026 08:31:28.949455    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:31:40 functional-622437 kubelet[4126]: E1026 08:31:40.949372    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:31:42 functional-622437 kubelet[4126]: E1026 08:31:42.948584    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:31:52 functional-622437 kubelet[4126]: E1026 08:31:52.948619    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:31:53 functional-622437 kubelet[4126]: E1026 08:31:53.948846    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:32:06 functional-622437 kubelet[4126]: E1026 08:32:06.949157    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:32:06 functional-622437 kubelet[4126]: E1026 08:32:06.949224    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:32:17 functional-622437 kubelet[4126]: E1026 08:32:17.948988    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:32:18 functional-622437 kubelet[4126]: E1026 08:32:18.949448    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:32:30 functional-622437 kubelet[4126]: E1026 08:32:30.949213    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:32:32 functional-622437 kubelet[4126]: E1026 08:32:32.948606    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:32:43 functional-622437 kubelet[4126]: E1026 08:32:43.949348    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:32:46 functional-622437 kubelet[4126]: E1026 08:32:46.949156    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:32:54 functional-622437 kubelet[4126]: E1026 08:32:54.949026    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:32:59 functional-622437 kubelet[4126]: E1026 08:32:59.949106    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:33:08 functional-622437 kubelet[4126]: E1026 08:33:08.948902    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:33:11 functional-622437 kubelet[4126]: E1026 08:33:11.951361    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:33:21 functional-622437 kubelet[4126]: E1026 08:33:21.949567    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:33:23 functional-622437 kubelet[4126]: E1026 08:33:23.949563    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:33:34 functional-622437 kubelet[4126]: E1026 08:33:34.949572    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:33:36 functional-622437 kubelet[4126]: E1026 08:33:36.949367    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:33:47 functional-622437 kubelet[4126]: E1026 08:33:47.948978    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:33:50 functional-622437 kubelet[4126]: E1026 08:33:50.949113    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	Oct 26 08:33:58 functional-622437 kubelet[4126]: E1026 08:33:58.949144    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r6lmg" podUID="f7885d94-b882-4c56-8ff9-4c80a2425de3"
	Oct 26 08:34:02 functional-622437 kubelet[4126]: E1026 08:34:02.949346    4126 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cq7c7" podUID="95744394-f3de-4cdc-8398-e873ed7d02e9"
	
	
	==> storage-provisioner [56032d686f162a8251bfb4d3207580119106cdd580481590a12623cfd3fcd730] <==
	I1026 08:23:17.990370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:23:17.991775       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [792222566dff702785856d628f134f983f33371c199543e7e5b24df8ac364b65] <==
	W1026 08:33:45.217938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:47.221252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:47.227975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:49.231200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:49.238294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:51.241560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:51.246226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:53.249558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:53.254087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:55.257515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:55.264249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:57.267748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:57.272017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:59.275257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:59.281717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:01.285795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:01.290370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:03.293799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:03.298133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:05.301246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:05.307735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:07.310492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:07.314892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:09.317917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:34:09.323984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-622437 -n functional-622437
helpers_test.go:269: (dbg) Run:  kubectl --context functional-622437 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-r6lmg hello-node-connect-7d85dfc575-cq7c7
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-622437 describe pod hello-node-75c85bcc94-r6lmg hello-node-connect-7d85dfc575-cq7c7
helpers_test.go:290: (dbg) kubectl --context functional-622437 describe pod hello-node-75c85bcc94-r6lmg hello-node-connect-7d85dfc575-cq7c7:

-- stdout --
	Name:             hello-node-75c85bcc94-r6lmg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-622437/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 08:24:25 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xlmjs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xlmjs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-r6lmg to functional-622437
	  Normal   Pulling    6m55s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m55s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m55s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m39s (x21 over 9m45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m39s (x21 over 9m45s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-cq7c7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-622437/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 08:24:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-444kn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-444kn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cq7c7 to functional-622437
	  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.67s)
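
Note: every image-pull failure in this test has the same root cause, visible in the Events above: CRI-O on this node enforces short-name mode, so the unqualified image name kicbase/echo-server is rejected as ambiguous before any registry is contacted. Two possible workarounds, sketched below; the deployment command mirrors the failing one, and the alias file path and contents are illustrative assumptions, not taken from this run:

	# Fully qualify the image so no short-name resolution is needed:
	kubectl --context functional-622437 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:latest

	# Or add a short-name alias inside the node, e.g. in a (hypothetical)
	# /etc/containers/registries.conf.d/000-shortnames.conf:
	#   [aliases]
	#     "kicbase/echo-server" = "docker.io/kicbase/echo-server"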

TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-622437 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-622437 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-r6lmg" [f7885d94-b882-4c56-8ff9-4c80a2425de3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1026 08:26:13.919552  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:26:41.625839  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:31:13.919422  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-622437 -n functional-622437
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-26 08:34:26.379827944 +0000 UTC m=+1277.773315002
functional_test.go:1460: (dbg) Run:  kubectl --context functional-622437 describe po hello-node-75c85bcc94-r6lmg -n default
functional_test.go:1460: (dbg) kubectl --context functional-622437 describe po hello-node-75c85bcc94-r6lmg -n default:
Name:             hello-node-75c85bcc94-r6lmg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-622437/192.168.49.2
Start Time:       Sun, 26 Oct 2025 08:24:25 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xlmjs (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-xlmjs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-r6lmg to functional-622437
  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-622437 logs hello-node-75c85bcc94-r6lmg -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-622437 logs hello-node-75c85bcc94-r6lmg -n default: exit status 1 (125.239558ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-r6lmg" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-622437 logs hello-node-75c85bcc94-r6lmg -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 service --namespace=default --https --url hello-node: exit status 115 (575.572439ms)

-- stdout --
	https://192.168.49.2:31536
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-622437 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 service hello-node --url --format={{.IP}}: exit status 115 (412.204173ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-622437 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 service hello-node --url: exit status 115 (441.825617ms)

-- stdout --
	http://192.168.49.2:31536
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-622437 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31536
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.44s)
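
Note: HTTPS, Format, and URL above fail identically: minikube can still compute the NodePort URL (it is printed on stdout), but it exits with SVC_UNREACHABLE because the hello-node pod never left ImagePullBackOff, so the service has no ready endpoints behind it. A minimal sketch of how that state could be confirmed (commands assumed, not part of the recorded run):

	kubectl --context functional-622437 get pods -l app=hello-node
	kubectl --context functional-622437 get endpoints hello-node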

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image load --daemon kicbase/echo-server:functional-622437 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 image load --daemon kicbase/echo-server:functional-622437 --alsologtostderr: (1.048751674s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-622437" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image load --daemon kicbase/echo-server:functional-622437 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-622437" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-622437
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image load --daemon kicbase/echo-server:functional-622437 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-622437" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image save kicbase/echo-server:functional-622437 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
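The assertion at functional_test.go:401 is just a file-existence check on the tarball after the save. A minimal sketch under the same assumptions (binary, profile, and tag from the log; the /tmp path is a hypothetical stand-in for the workspace path used by the job):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Hypothetical stand-in for the workspace path used by the test.
		tar := "/tmp/echo-server-save.tar"
		save := exec.Command("out/minikube-linux-arm64", "-p", "functional-622437",
			"image", "save", "kicbase/echo-server:functional-622437", tar)
		if out, err := save.CombinedOutput(); err != nil {
			fmt.Printf("image save failed: %v\n%s", err, out)
			return
		}
		// The failing assertion reduces to this stat on the output file.
		if _, err := os.Stat(tar); errors.Is(err, os.ErrNotExist) {
			fmt.Printf("expected %q to exist after `image save`, but it doesn't\n", tar)
		}
	}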

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1026 08:34:39.533708  323427 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:34:39.533946  323427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:34:39.533980  323427 out.go:374] Setting ErrFile to fd 2...
	I1026 08:34:39.534000  323427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:34:39.534288  323427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:34:39.535022  323427 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:34:39.535208  323427 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:34:39.535717  323427 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
	I1026 08:34:39.553854  323427 ssh_runner.go:195] Run: systemctl --version
	I1026 08:34:39.553918  323427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
	I1026 08:34:39.572679  323427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
	I1026 08:34:39.681599  323427 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1026 08:34:39.681667  323427 cache_images.go:254] Failed to load cached images for "functional-622437": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1026 08:34:39.681692  323427 cache_images.go:266] failed pushing to: functional-622437

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
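This failure is downstream of ImageSaveToFile: the warning at cache_images.go:254 is a plain ENOENT on the tarball, which the previous test never produced, so there was nothing to load. A sketch that makes the dependency explicit by guarding the load on the artifact, reusing the hypothetical path from the sketch above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		tar := "/tmp/echo-server-save.tar" // hypothetical stand-in path
		// Check the artifact first; the FAIL above is exactly this stat failing.
		if _, err := os.Stat(tar); err != nil {
			fmt.Printf("nothing to load: %v\n", err)
			return
		}
		load := exec.Command("out/minikube-linux-arm64", "-p", "functional-622437",
			"image", "load", tar)
		if out, err := load.CombinedOutput(); err != nil {
			fmt.Printf("image load failed: %v\n%s", err, out)
		}
	}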

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-622437
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image save --daemon kicbase/echo-server:functional-622437 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-622437
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-622437: exit status 1 (20.624829ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-622437

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
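The reverse direction fails the same way: `image save --daemon` should export the image from the cluster runtime back into the host Docker daemon, and the test then inspects it under the localhost/ prefix seen in the error output (that this prefix is how CRI-O-sourced images land in the daemon is inferred from this log, not confirmed elsewhere). A sketch of the round trip:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1: export from the cluster runtime into the host docker daemon.
		save := exec.Command("out/minikube-linux-arm64", "-p", "functional-622437",
			"image", "save", "--daemon", "kicbase/echo-server:functional-622437")
		if out, err := save.CombinedOutput(); err != nil {
			fmt.Printf("image save --daemon failed: %v\n%s", err, out)
			return
		}
		// Step 2: inspect the localhost/-prefixed name, as the test does above.
		inspect := exec.Command("docker", "image", "inspect",
			"localhost/kicbase/echo-server:functional-622437")
		if out, err := inspect.CombinedOutput(); err != nil {
			fmt.Printf("image not found in docker daemon: %v\n%s", err, out)
		}
	}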

TestMultiControlPlane/serial/RestartClusterKeepsNodes (426.98s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 stop --alsologtostderr -v 5: (26.60120362s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 start --wait true --alsologtostderr -v 5
E1026 08:43:59.194507  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:44:26.894875  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:46:13.919147  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:48:59.192817  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-232402 start --wait true --alsologtostderr -v 5: exit status 80 (6m37.045259377s)

-- stdout --
	* [ha-232402] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-232402" primary control-plane node in "ha-232402" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-232402-m02" control-plane node in "ha-232402" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-232402-m03" control-plane node in "ha-232402" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-232402-m04" worker node in "ha-232402" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1026 08:42:48.917934  342550 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:42:48.918170  342550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:42:48.918204  342550 out.go:374] Setting ErrFile to fd 2...
	I1026 08:42:48.918225  342550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:42:48.918525  342550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:42:48.918983  342550 out.go:368] Setting JSON to false
	I1026 08:42:48.919916  342550 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8719,"bootTime":1761459450,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:42:48.920018  342550 start.go:141] virtualization:  
	I1026 08:42:48.923144  342550 out.go:179] * [ha-232402] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:42:48.927011  342550 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:42:48.927093  342550 notify.go:220] Checking for updates...
	I1026 08:42:48.933001  342550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:42:48.935959  342550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:48.939045  342550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:42:48.941971  342550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:42:48.944900  342550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:42:48.948888  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:48.948992  342550 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:42:48.982651  342550 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:42:48.982836  342550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:42:49.052116  342550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-26 08:42:49.041304773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:42:49.052225  342550 docker.go:318] overlay module found
	I1026 08:42:49.055376  342550 out.go:179] * Using the docker driver based on existing profile
	I1026 08:42:49.058272  342550 start.go:305] selected driver: docker
	I1026 08:42:49.058291  342550 start.go:925] validating driver "docker" against &{Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:49.058453  342550 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:42:49.058555  342550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:42:49.113827  342550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-26 08:42:49.10402828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:42:49.114262  342550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:42:49.114296  342550 cni.go:84] Creating CNI manager for ""
	I1026 08:42:49.114371  342550 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 08:42:49.114423  342550 start.go:349] cluster config:
	{Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:49.119386  342550 out.go:179] * Starting "ha-232402" primary control-plane node in "ha-232402" cluster
	I1026 08:42:49.122223  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:42:49.125135  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:42:49.127883  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:49.127936  342550 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 08:42:49.127964  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:42:49.127976  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:42:49.128054  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:42:49.128065  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:42:49.128205  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:49.148213  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:42:49.148234  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:42:49.148247  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:42:49.148277  342550 start.go:360] acquireMachinesLock for ha-232402: {Name:mkd235a265416fa355dec74b5ac56d04d491256e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:42:49.148333  342550 start.go:364] duration metric: took 39.081µs to acquireMachinesLock for "ha-232402"
	I1026 08:42:49.148353  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:42:49.148358  342550 fix.go:54] fixHost starting: 
	I1026 08:42:49.148604  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:42:49.166112  342550 fix.go:112] recreateIfNeeded on ha-232402: state=Stopped err=<nil>
	W1026 08:42:49.166154  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:42:49.169342  342550 out.go:252] * Restarting existing docker container for "ha-232402" ...
	I1026 08:42:49.169424  342550 cli_runner.go:164] Run: docker start ha-232402
	I1026 08:42:49.418525  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:42:49.441545  342550 kic.go:430] container "ha-232402" state is running.
	I1026 08:42:49.441931  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:49.465537  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:49.465781  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:42:49.465856  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:49.483751  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:49.484066  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:49.484076  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:42:49.484629  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55242->127.0.0.1:33180: read: connection reset by peer
	I1026 08:42:52.642170  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402
	
	I1026 08:42:52.642200  342550 ubuntu.go:182] provisioning hostname "ha-232402"
	I1026 08:42:52.642273  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:52.660229  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:52.660550  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:52.660567  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402 && echo "ha-232402" | sudo tee /etc/hostname
	I1026 08:42:52.820313  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402
	
	I1026 08:42:52.820402  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:52.840800  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:52.841134  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:52.841160  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:42:52.990861  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:42:52.990892  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:42:52.990914  342550 ubuntu.go:190] setting up certificates
	I1026 08:42:52.990924  342550 provision.go:84] configureAuth start
	I1026 08:42:52.990990  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:53.009824  342550 provision.go:143] copyHostCerts
	I1026 08:42:53.009871  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:42:53.009906  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:42:53.009927  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:42:53.010020  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:42:53.010118  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:42:53.010140  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:42:53.010145  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:42:53.010179  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:42:53.010234  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:42:53.010255  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:42:53.010265  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:42:53.010300  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:42:53.010365  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402 san=[127.0.0.1 192.168.49.2 ha-232402 localhost minikube]
	I1026 08:42:54.039767  342550 provision.go:177] copyRemoteCerts
	I1026 08:42:54.039841  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:42:54.039881  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.058074  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.162887  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:42:54.162960  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1026 08:42:54.182166  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:42:54.182225  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:42:54.200141  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:42:54.200208  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:42:54.218057  342550 provision.go:87] duration metric: took 1.227107421s to configureAuth
	I1026 08:42:54.218140  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:42:54.218410  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:54.218534  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.236086  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:54.236409  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:54.236427  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:42:54.568914  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:42:54.568937  342550 machine.go:96] duration metric: took 5.103139338s to provisionDockerMachine
	I1026 08:42:54.568948  342550 start.go:293] postStartSetup for "ha-232402" (driver="docker")
	I1026 08:42:54.568959  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:42:54.569025  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:42:54.569071  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.593317  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.698695  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:42:54.702088  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:42:54.702117  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:42:54.702129  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:42:54.702512  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:42:54.702614  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:42:54.702623  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:42:54.702789  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:42:54.713617  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:42:54.730927  342550 start.go:296] duration metric: took 161.96257ms for postStartSetup
	I1026 08:42:54.731067  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:42:54.731128  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.748393  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.851766  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:42:54.857035  342550 fix.go:56] duration metric: took 5.708668211s for fixHost
	I1026 08:42:54.857061  342550 start.go:83] releasing machines lock for "ha-232402", held for 5.708719658s
	I1026 08:42:54.857136  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:54.874075  342550 ssh_runner.go:195] Run: cat /version.json
	I1026 08:42:54.874138  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.874395  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:42:54.874465  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.896310  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.897209  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:55.096305  342550 ssh_runner.go:195] Run: systemctl --version
	I1026 08:42:55.103174  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:42:55.140113  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:42:55.144490  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:42:55.144568  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:42:55.152609  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:42:55.152677  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:42:55.152720  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:42:55.152774  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:42:55.168885  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:42:55.183022  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:42:55.183092  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:42:55.199361  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:42:55.212983  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:42:55.329311  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:42:55.439788  342550 docker.go:234] disabling docker service ...
	I1026 08:42:55.439882  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:42:55.455129  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:42:55.468360  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:42:55.591545  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:42:55.712355  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:42:55.725339  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:42:55.739516  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:42:55.739619  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.748984  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:42:55.749080  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.758145  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.767369  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.776548  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:42:55.784814  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.794122  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.802447  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.811302  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:42:55.818789  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:42:55.826164  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:42:55.945131  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:42:56.073628  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:42:56.073791  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:42:56.077812  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:42:56.077890  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:42:56.081474  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:42:56.106451  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:42:56.106572  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:42:56.135851  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:42:56.170040  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:42:56.172899  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:42:56.189266  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:42:56.192940  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:42:56.202818  342550 kubeadm.go:883] updating cluster {Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:42:56.202967  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:56.203031  342550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:42:56.242649  342550 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:42:56.242675  342550 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:42:56.242785  342550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:42:56.267929  342550 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:42:56.267952  342550 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:42:56.267962  342550 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 08:42:56.268090  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:42:56.268186  342550 ssh_runner.go:195] Run: crio config
	I1026 08:42:56.329063  342550 cni.go:84] Creating CNI manager for ""
	I1026 08:42:56.329091  342550 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 08:42:56.329119  342550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:42:56.329143  342550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-232402 NodeName:ha-232402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:42:56.329378  342550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-232402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:42:56.329404  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:42:56.329467  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:42:56.341574  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:42:56.341697  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1026 08:42:56.341768  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:42:56.350317  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:42:56.350440  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 08:42:56.358169  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1026 08:42:56.371463  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:42:56.384425  342550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1026 08:42:56.397225  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:42:56.410169  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:42:56.413685  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:42:56.423463  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:42:56.541144  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:42:56.557207  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.2
	I1026 08:42:56.557272  342550 certs.go:195] generating shared ca certs ...
	I1026 08:42:56.557303  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:56.557467  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:42:56.557541  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:42:56.557576  342550 certs.go:257] generating profile certs ...
	I1026 08:42:56.557692  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:42:56.557760  342550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea
	I1026 08:42:56.557782  342550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1026 08:42:57.202922  342550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea ...
	I1026 08:42:57.202955  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea: {Name:mk933c6500306ddc2c8fa2cedfd5052423ec2536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:57.203128  342550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea ...
	I1026 08:42:57.203144  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea: {Name:mkf5c2bd5c725d62808b0af7cfa80f3d97af9f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:57.203241  342550 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt
	I1026 08:42:57.204200  342550 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key
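
The apiserver certificate generated above carries every address a client might use: the in-cluster service IP (10.96.0.1), localhost, each control-plane node IP (192.168.49.2-4), and the kube-vip VIP (192.168.49.254). A library-style sketch of signing such a certificate against an existing CA with crypto/x509; key size, validity, and subject are assumptions, not minikube's exact parameters:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server cert whose SANs cover all given IPs,
// signed by the supplied CA certificate and key.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // the SAN list: service IP, localhost, node IPs, VIP
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
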
	I1026 08:42:57.204356  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:42:57.204376  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:42:57.204394  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:42:57.204414  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:42:57.204432  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:42:57.204452  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:42:57.204471  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:42:57.204482  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:42:57.204496  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:42:57.204543  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:42:57.204577  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:42:57.204589  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:42:57.204613  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:42:57.204639  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:42:57.204664  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:42:57.204710  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:42:57.204740  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.204757  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.204770  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.205388  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:42:57.231752  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:42:57.264536  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:42:57.295902  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:42:57.324874  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:42:57.356420  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:42:57.393782  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:42:57.430094  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:42:57.476853  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:42:57.514216  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:42:57.542038  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:42:57.573718  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:42:57.596671  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:42:57.604302  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:42:57.620193  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.624096  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.624163  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.684171  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:42:57.692726  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:42:57.703409  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.709875  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.709939  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.761720  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:42:57.770155  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:42:57.782379  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.786510  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.786589  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.842092  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:42:57.850459  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:42:57.854127  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:42:57.922143  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:42:57.991084  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:42:58.032484  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:42:58.075471  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:42:58.119880  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
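
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours; a non-zero exit here is what triggers certificate regeneration on restart. The same check in Go (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// will already be expired 24h from now
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid beyond 24h")
}
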
	I1026 08:42:58.162522  342550 kubeadm.go:400] StartCluster: {Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:58.162655  342550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:42:58.162737  342550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:42:58.219487  342550 cri.go:89] found id: "b61c82cad7fbfa81b5335ff117e6fd6ed77be750be18b2795baad05c04597be3"
	I1026 08:42:58.219510  342550 cri.go:89] found id: "1c8917dd6e25dfe8420b3a3b324ba48edc068e4197ed8c758044d6818d9f3ba7"
	I1026 08:42:58.219516  342550 cri.go:89] found id: "7a416fdc86cf67bda0bfabac32d527db13c8586bd8ae683896061d13e70b3bf2"
	I1026 08:42:58.219520  342550 cri.go:89] found id: "f20afdb6dc9568c5fef5900fd16550aaeceaace97af19ff784772913a96da43b"
	I1026 08:42:58.219523  342550 cri.go:89] found id: "1902c617979ded8ef7430e8c9f9735ce1b420b6259bcc8d54001ef6f37f1fd3f"
	I1026 08:42:58.219526  342550 cri.go:89] found id: ""
	I1026 08:42:58.219576  342550 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:42:58.231211  342550 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:42:58Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:42:58.231293  342550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:42:58.239815  342550 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:42:58.239836  342550 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:42:58.239895  342550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:42:58.252247  342550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:42:58.252648  342550 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-232402" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:58.252758  342550 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "ha-232402" cluster setting kubeconfig missing "ha-232402" context setting]
	I1026 08:42:58.253044  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.253554  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
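
The client config dumped above is plain mTLS against the apiserver: the profile's client certificate and key authenticate the user, and the cluster CA verifies the server. Building the equivalent client with client-go, using the paths from the log (a minimal sketch):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// quick liveness probe: list the nodes of the restarted HA cluster
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
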
	I1026 08:42:58.254045  342550 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:42:58.254065  342550 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:42:58.254095  342550 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:42:58.254103  342550 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:42:58.254108  342550 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:42:58.254472  342550 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1026 08:42:58.256702  342550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:42:58.269972  342550 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1026 08:42:58.269997  342550 kubeadm.go:601] duration metric: took 30.154432ms to restartPrimaryControlPlane
	I1026 08:42:58.270006  342550 kubeadm.go:402] duration metric: took 107.493524ms to StartCluster
	I1026 08:42:58.270028  342550 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.270094  342550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:58.270678  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.270895  342550 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:42:58.270923  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:42:58.270932  342550 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:42:58.271445  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:58.276967  342550 out.go:179] * Enabled addons: 
	I1026 08:42:58.279988  342550 addons.go:514] duration metric: took 9.042438ms for enable addons: enabled=[]
	I1026 08:42:58.280034  342550 start.go:246] waiting for cluster config update ...
	I1026 08:42:58.280044  342550 start.go:255] writing updated cluster config ...
	I1026 08:42:58.283287  342550 out.go:203] 
	I1026 08:42:58.286419  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:58.286541  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.289808  342550 out.go:179] * Starting "ha-232402-m02" control-plane node in "ha-232402" cluster
	I1026 08:42:58.292646  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:42:58.295642  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:42:58.298397  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:58.298422  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:42:58.298528  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:42:58.298543  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:42:58.298666  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.298902  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:42:58.334398  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:42:58.334424  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:42:58.334438  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:42:58.334461  342550 start.go:360] acquireMachinesLock for ha-232402-m02: {Name:mkcee86299772a936378440a31e878294fbfa9f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:42:58.334510  342550 start.go:364] duration metric: took 34.667µs to acquireMachinesLock for "ha-232402-m02"
	I1026 08:42:58.334530  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:42:58.334535  342550 fix.go:54] fixHost starting: m02
	I1026 08:42:58.334809  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:42:58.368471  342550 fix.go:112] recreateIfNeeded on ha-232402-m02: state=Stopped err=<nil>
	W1026 08:42:58.368496  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:42:58.371679  342550 out.go:252] * Restarting existing docker container for "ha-232402-m02" ...
	I1026 08:42:58.371767  342550 cli_runner.go:164] Run: docker start ha-232402-m02
	I1026 08:42:58.772810  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:42:58.801152  342550 kic.go:430] container "ha-232402-m02" state is running.
	I1026 08:42:58.801522  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:42:58.832989  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.833245  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:42:58.833311  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:42:58.867008  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:58.867344  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:42:58.867353  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:42:58.868022  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:43:02.066423  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m02
	
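
provisionDockerMachine drives the node over SSH on the docker-forwarded port (127.0.0.1:33185 here); the hostname probe above simply confirms the session works, retrying past the initial handshake EOF while sshd comes up inside the container. A minimal sketch of the same probe using golang.org/x/crypto/ssh (the key path and retry policy are assumptions):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func runOnce(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output(cmd)
	return string(out), err
}

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only; never in production
		Timeout:         10 * time.Second,
	}
	// retry while the container's sshd is still starting (the log shows one EOF)
	for i := 0; i < 10; i++ {
		if out, err := runOnce("127.0.0.1:33185", cfg, "hostname"); err == nil {
			fmt.Print(out)
			return
		}
		time.Sleep(time.Second)
	}
	panic("ssh never became ready")
}
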
	I1026 08:43:02.066511  342550 ubuntu.go:182] provisioning hostname "ha-232402-m02"
	I1026 08:43:02.066610  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:02.100484  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:02.100810  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:02.100821  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m02 && echo "ha-232402-m02" | sudo tee /etc/hostname
	I1026 08:43:02.308004  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m02
	
	I1026 08:43:02.308166  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:02.334891  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:02.335210  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:02.335226  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:02.514818  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:02.514905  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:43:02.514937  342550 ubuntu.go:190] setting up certificates
	I1026 08:43:02.514979  342550 provision.go:84] configureAuth start
	I1026 08:43:02.515065  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:43:02.560373  342550 provision.go:143] copyHostCerts
	I1026 08:43:02.560414  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:02.560461  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:43:02.560470  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:02.560546  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:43:02.560626  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:02.560643  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:43:02.560648  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:02.560672  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:43:02.560715  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:02.560731  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:43:02.560735  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:02.560758  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:43:02.560803  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m02 san=[127.0.0.1 192.168.49.3 ha-232402-m02 localhost minikube]
	I1026 08:43:03.208517  342550 provision.go:177] copyRemoteCerts
	I1026 08:43:03.208589  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:03.208637  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.226696  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:03.338996  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:43:03.339064  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:43:03.364234  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:43:03.364299  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:03.392294  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:43:03.392357  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:43:03.425649  342550 provision.go:87] duration metric: took 910.644183ms to configureAuth
	I1026 08:43:03.425677  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:43:03.425959  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:03.426065  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.458884  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:03.459198  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:03.459218  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:03.839944  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:03.839966  342550 machine.go:96] duration metric: took 5.006711527s to provisionDockerMachine
	I1026 08:43:03.839977  342550 start.go:293] postStartSetup for "ha-232402-m02" (driver="docker")
	I1026 08:43:03.839988  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:03.840046  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:03.840113  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.857989  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:03.966802  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:03.970325  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:43:03.970356  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:43:03.970368  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:43:03.970455  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:43:03.970594  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:43:03.970609  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:43:03.970707  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:03.978929  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:04.001526  342550 start.go:296] duration metric: took 161.533931ms for postStartSetup
	I1026 08:43:04.001644  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:43:04.001711  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.029362  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.147606  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:43:04.158657  342550 fix.go:56] duration metric: took 5.824113305s for fixHost
	I1026 08:43:04.158679  342550 start.go:83] releasing machines lock for "ha-232402-m02", held for 5.824161494s
	I1026 08:43:04.158852  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:43:04.190337  342550 out.go:179] * Found network options:
	I1026 08:43:04.193487  342550 out.go:179]   - NO_PROXY=192.168.49.2
	W1026 08:43:04.196584  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:04.196654  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:43:04.196729  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:04.196774  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.197012  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:04.197069  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.241682  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.251119  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.602534  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:04.612399  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:04.612470  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:04.625469  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:43:04.625494  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:43:04.625529  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:43:04.625585  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:04.650032  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:04.672644  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:04.672717  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:04.691930  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:04.713738  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:04.895936  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:05.091815  342550 docker.go:234] disabling docker service ...
	I1026 08:43:05.091890  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:05.117939  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:05.141552  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:05.385159  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:05.717724  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:05.754449  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:05.787254  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:43:05.787365  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.812135  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:05.812208  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.833814  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.869621  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.895385  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:43:05.916665  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.945670  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.979261  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
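
Taken together, the sed edits above should leave the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf looking roughly like the following; this is reconstructed from the commands in the log, not captured from the node, and the real file carries additional settings:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
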
	I1026 08:43:06.007406  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:43:06.024152  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:43:06.048022  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:06.407451  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:43:07.762107  342550 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.354620144s)
	I1026 08:43:07.762151  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:43:07.762206  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:43:07.766031  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:43:07.766103  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:43:07.769733  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:43:07.814809  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
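
After restarting CRI-O, the flow above polls for the runtime socket and then for a crictl version response before proceeding, each with a 60s budget. A sketch of such a readiness wait in Go (the helper itself is hypothetical; the path and timeout are the values from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket blocks until a unix socket accepts connections or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("CRI-O socket is accepting connections")
}
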
	I1026 08:43:07.814907  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:43:07.866941  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:43:07.921153  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:43:07.924118  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:43:07.927047  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:43:07.969779  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:43:07.973594  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:43:07.987191  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:43:07.987445  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:07.987717  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:43:08.008779  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:43:08.009283  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.3
	I1026 08:43:08.009300  342550 certs.go:195] generating shared ca certs ...
	I1026 08:43:08.009316  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:43:08.009468  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:43:08.009524  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:43:08.009531  342550 certs.go:257] generating profile certs ...
	I1026 08:43:08.009619  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:43:08.009879  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.fae769c1
	I1026 08:43:08.009932  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:43:08.009943  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:43:08.009956  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:43:08.009967  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:43:08.009979  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:43:08.009990  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:43:08.010002  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:43:08.010014  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:43:08.010024  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:43:08.010077  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:43:08.010105  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:43:08.010112  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:43:08.010135  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:43:08.010156  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:43:08.010177  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:43:08.010236  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:08.010266  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.010279  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.010289  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.010370  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:43:08.032241  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:43:08.139306  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 08:43:08.144038  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 08:43:08.155846  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 08:43:08.160324  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 08:43:08.170065  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 08:43:08.174060  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 08:43:08.188168  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 08:43:08.192073  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1026 08:43:08.200629  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 08:43:08.205998  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 08:43:08.216901  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 08:43:08.221162  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 08:43:08.231147  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:43:08.250111  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:43:08.269251  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:43:08.288444  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:43:08.306389  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:43:08.325763  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:43:08.345171  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:43:08.363276  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:43:08.388034  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:43:08.407557  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:43:08.426288  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:43:08.445629  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 08:43:08.459889  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 08:43:08.474059  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 08:43:08.487641  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1026 08:43:08.501076  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 08:43:08.514660  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 08:43:08.530178  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 08:43:08.543653  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:43:08.551337  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:43:08.559877  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.563863  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.563978  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.606128  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:43:08.614418  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:43:08.622608  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.626862  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.626984  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.668441  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:43:08.678228  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:43:08.694156  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.699405  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.699525  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.741501  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:43:08.749451  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:43:08.753614  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:43:08.794639  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:43:08.835994  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:43:08.884952  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:43:08.929998  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:43:08.973568  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
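Each `-checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the runner decides whether control-plane certs need regenerating. The same check can be done in pure Go with crypto/x509; a sketch, using one path from this log as an example:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded cert at path expires
    // within d, the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }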
	I1026 08:43:09.018771  342550 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1026 08:43:09.018901  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:43:09.018964  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:43:09.019040  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:43:09.033326  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as the ip_vs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
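The fallback above is driven by a simple probe: `lsmod | grep ip_vs` exits 1 when the module is not loaded, so kube-vip gets configured without IPVS load balancing. A sketch of the equivalent check in Go, reading /proc/modules (the file lsmod itself reports from); the helper name is mine:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // moduleLoaded scans /proc/modules for an exact module name.
    // Note it cannot see features compiled into the kernel, only modules.
    func moduleLoaded(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            if strings.HasPrefix(s.Text(), name+" ") {
                return true, nil
            }
        }
        return false, s.Err()
    }

    func main() {
        ok, err := moduleLoaded("ip_vs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("ip_vs loaded:", ok)
    }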
	I1026 08:43:09.033397  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
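The manifest pins kube-vip to ARP mode with leader election: the lease settings (5s duration, 3s renew deadline, 1s retry period) decide which control-plane node answers ARP for 192.168.49.254 on eth0. Purely as an illustration of how such a static-pod manifest can be rendered from parameters (this is not minikube's template; the struct and field names are invented):

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed-down manifest template covering only a few of the
    // settings visible in the log above.
    const vipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: address
          value: "{{ .VIP }}"
        - name: port
          value: "{{ .Port }}"
        - name: vip_interface
          value: {{ .Interface }}
      hostNetwork: true
    `

    type vipConfig struct {
        Image, VIP, Port, Interface string
    }

    func main() {
        t := template.Must(template.New("kube-vip").Parse(vipTmpl))
        _ = t.Execute(os.Stdout, vipConfig{
            Image:     "ghcr.io/kube-vip/kube-vip:v1.0.1",
            VIP:       "192.168.49.254",
            Port:      "8443",
            Interface: "eth0",
        })
    }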
	I1026 08:43:09.033460  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:43:09.042327  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:43:09.042441  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 08:43:09.053364  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:43:09.067913  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:43:09.083307  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:43:09.097627  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:43:09.102025  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
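The one-liner above updates /etc/hosts idempotently: filter out any stale `control-plane.minikube.internal` line, append the fresh mapping, and sudo-copy the temp file back so the shell redirection itself needs no root. The same idea in Go (a sketch that must run as root; the helper name is mine):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps name to ip,
    // matching the tab-separated format the grep above looks for.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }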
	I1026 08:43:09.114414  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:09.252566  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:43:09.267980  342550 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:43:09.268336  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:09.273239  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:43:09.276128  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:09.414962  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:43:09.429491  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:43:09.429623  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:43:09.429932  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m02" to be "Ready" ...
	I1026 08:43:27.238867  342550 node_ready.go:49] node "ha-232402-m02" is "Ready"
	I1026 08:43:27.238899  342550 node_ready.go:38] duration metric: took 17.808924366s for node "ha-232402-m02" to be "Ready" ...
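The "Ready" wait boils down to polling the Node object until its NodeReady condition reports True. A trimmed client-go sketch of that loop (kubeconfig discovery and timeout handling are simplified relative to minikube's own code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for ctx.Err() == nil {
            if ok, err := nodeReady(ctx, cs, "ha-232402-m02"); err == nil && ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node")
    }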
	I1026 08:43:27.238912  342550 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:43:27.238976  342550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:43:27.263231  342550 api_server.go:72] duration metric: took 17.995203495s to wait for apiserver process to appear ...
	I1026 08:43:27.263257  342550 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:43:27.263278  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:27.286625  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:43:27.286661  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:43:27.763965  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:27.797733  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:27.797765  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [healthz body identical to the response above, elided]
	I1026 08:43:28.264086  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:28.272772  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:28.272800  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [healthz body identical to the response above, elided]
	I1026 08:43:28.763318  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:28.773873  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:28.773903  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [healthz body identical to the response above, elided]
	I1026 08:43:29.263609  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:29.271856  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:43:29.272939  342550 api_server.go:141] control plane version: v1.34.1
	I1026 08:43:29.272963  342550 api_server.go:131] duration metric: took 2.009698678s to wait for apiserver health ...
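The sequence above (403, then 500s, then 200) is the normal apiserver warm-up: anonymous /healthz is forbidden until the RBAC bootstrap roles land, after which individual poststarthooks flip from failed to ok. A hedged sketch of such a poll loop in Go; InsecureSkipVerify is for brevity only, a real client should trust the cluster CA as minikube's does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it returns 200 or the deadline
    // passes. 403 (anonymous forbidden) and 500 (poststarthooks still
    // failing) are treated as "not ready yet", as in the log above.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative only: skipping verification is unsafe.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", deadline)
    }

    func main() {
        if err := waitHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }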
	I1026 08:43:29.272972  342550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:43:29.286570  342550 system_pods.go:59] 26 kube-system pods found
	I1026 08:43:29.286609  342550 system_pods.go:61] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.286618  342550 system_pods.go:61] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.286624  342550 system_pods.go:61] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:43:29.286629  342550 system_pods.go:61] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:43:29.286634  342550 system_pods.go:61] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:43:29.286639  342550 system_pods.go:61] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:43:29.286644  342550 system_pods.go:61] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:43:29.286648  342550 system_pods.go:61] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:43:29.286659  342550 system_pods.go:61] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:43:29.286666  342550 system_pods.go:61] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:43:29.286679  342550 system_pods.go:61] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:43:29.286684  342550 system_pods.go:61] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:43:29.286690  342550 system_pods.go:61] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:43:29.286698  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:43:29.286704  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:43:29.286759  342550 system_pods.go:61] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:43:29.286774  342550 system_pods.go:61] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:43:29.286779  342550 system_pods.go:61] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:43:29.286784  342550 system_pods.go:61] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:43:29.286790  342550 system_pods.go:61] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:43:29.286797  342550 system_pods.go:61] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:43:29.286807  342550 system_pods.go:61] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:43:29.286813  342550 system_pods.go:61] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:43:29.286818  342550 system_pods.go:61] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:43:29.286824  342550 system_pods.go:61] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:43:29.286830  342550 system_pods.go:61] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:43:29.286835  342550 system_pods.go:74] duration metric: took 13.857629ms to wait for pod list to return data ...
	I1026 08:43:29.286845  342550 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:43:29.292456  342550 default_sa.go:45] found service account: "default"
	I1026 08:43:29.292483  342550 default_sa.go:55] duration metric: took 5.6309ms for default service account to be created ...
	I1026 08:43:29.292493  342550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:43:29.303662  342550 system_pods.go:86] 26 kube-system pods found
	I1026 08:43:29.303699  342550 system_pods.go:89] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.303711  342550 system_pods.go:89] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.303717  342550 system_pods.go:89] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:43:29.303722  342550 system_pods.go:89] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:43:29.303726  342550 system_pods.go:89] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:43:29.303731  342550 system_pods.go:89] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:43:29.303736  342550 system_pods.go:89] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:43:29.303741  342550 system_pods.go:89] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:43:29.303745  342550 system_pods.go:89] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:43:29.303755  342550 system_pods.go:89] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:43:29.303760  342550 system_pods.go:89] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:43:29.303771  342550 system_pods.go:89] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:43:29.303778  342550 system_pods.go:89] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:43:29.303783  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:43:29.303788  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:43:29.303793  342550 system_pods.go:89] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:43:29.303796  342550 system_pods.go:89] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:43:29.303800  342550 system_pods.go:89] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:43:29.303804  342550 system_pods.go:89] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:43:29.303810  342550 system_pods.go:89] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:43:29.303815  342550 system_pods.go:89] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:43:29.303819  342550 system_pods.go:89] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:43:29.303823  342550 system_pods.go:89] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:43:29.303827  342550 system_pods.go:89] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:43:29.303830  342550 system_pods.go:89] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:43:29.303834  342550 system_pods.go:89] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:43:29.303840  342550 system_pods.go:126] duration metric: took 11.341628ms to wait for k8s-apps to be running ...
	I1026 08:43:29.303854  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:43:29.303908  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:43:29.323431  342550 system_svc.go:56] duration metric: took 19.574494ms WaitForService to wait for kubelet
	I1026 08:43:29.323460  342550 kubeadm.go:586] duration metric: took 20.055438295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:43:29.323478  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:43:29.333801  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333841  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333854  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333859  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333864  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333868  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333872  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333876  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333881  342550 node_conditions.go:105] duration metric: took 10.39707ms to run NodePressure ...
	I1026 08:43:29.333892  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:43:29.333919  342550 start.go:255] writing updated cluster config ...
	I1026 08:43:29.337457  342550 out.go:203] 
	I1026 08:43:29.340743  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:29.340922  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.344362  342550 out.go:179] * Starting "ha-232402-m03" control-plane node in "ha-232402" cluster
	I1026 08:43:29.348018  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:43:29.351781  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:43:29.354814  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:43:29.354918  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:43:29.354883  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:43:29.355255  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:43:29.355280  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:43:29.355447  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.375411  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:43:29.375429  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:43:29.375442  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:43:29.375466  342550 start.go:360] acquireMachinesLock for ha-232402-m03: {Name:mk956b02a4f725f23f9fb3f2ce92112bc2e1b45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:43:29.375516  342550 start.go:364] duration metric: took 35.873µs to acquireMachinesLock for "ha-232402-m03"
	I1026 08:43:29.375534  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:43:29.375540  342550 fix.go:54] fixHost starting: m03
	I1026 08:43:29.375948  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:43:29.401895  342550 fix.go:112] recreateIfNeeded on ha-232402-m03: state=Stopped err=<nil>
	W1026 08:43:29.401920  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:43:29.405493  342550 out.go:252] * Restarting existing docker container for "ha-232402-m03" ...
	I1026 08:43:29.405580  342550 cli_runner.go:164] Run: docker start ha-232402-m03
	I1026 08:43:29.812599  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:43:29.835988  342550 kic.go:430] container "ha-232402-m03" state is running.
	I1026 08:43:29.836452  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:29.866387  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.866681  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:43:29.866829  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:29.906362  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:29.906690  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:29.907638  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:43:29.908402  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:43:33.170636  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m03
	
	I1026 08:43:33.170746  342550 ubuntu.go:182] provisioning hostname "ha-232402-m03"
	I1026 08:43:33.170851  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:33.206417  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:33.206830  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:33.206844  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m03 && echo "ha-232402-m03" | sudo tee /etc/hostname
	I1026 08:43:33.524716  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m03
	
	I1026 08:43:33.524858  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:33.549504  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:33.549810  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:33.549827  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:33.856044  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
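Every provisioning step here runs over SSH to the container's forwarded port (33190 in this log). A minimal equivalent using golang.org/x/crypto/ssh, with the key path simplified and host-key checking disabled purely because this is a throwaway test rig:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/ha-232402-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only; pin keys in real use
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33190", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        // Same command shape as the provisioner's hostname step above.
        out, err := session.CombinedOutput(`sudo hostname ha-232402-m03 && echo "ha-232402-m03" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }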
	I1026 08:43:33.856113  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:43:33.856146  342550 ubuntu.go:190] setting up certificates
	I1026 08:43:33.856188  342550 provision.go:84] configureAuth start
	I1026 08:43:33.856287  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:33.880087  342550 provision.go:143] copyHostCerts
	I1026 08:43:33.880126  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:33.880159  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:43:33.880166  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:33.880246  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:43:33.880325  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:33.880342  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:43:33.880346  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:33.880369  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:43:33.880408  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:33.880423  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:43:33.880427  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:33.880448  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:43:33.880491  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m03 san=[127.0.0.1 192.168.49.4 ha-232402-m03 localhost minikube]
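The server certificate is minted with exactly the SAN list shown (loopback, the node IP, the hostname, localhost, minikube), so TLS verifies no matter how the machine is addressed. A compact crypto/x509 sketch of SAN-bearing cert generation; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-232402-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log line above.
            DNSNames:    []string{"ha-232402-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
        }
        // Self-signed: the template doubles as the parent. minikube
        // instead passes its CA certificate and CA key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }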
	I1026 08:43:34.115589  342550 provision.go:177] copyRemoteCerts
	I1026 08:43:34.115701  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:34.115779  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.133889  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:34.307782  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:43:34.307842  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:34.361519  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:43:34.361585  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:43:34.420419  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:43:34.420486  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:43:34.479633  342550 provision.go:87] duration metric: took 623.414755ms to configureAuth
	I1026 08:43:34.479699  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:43:34.479974  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:34.480118  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.505756  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:34.506063  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:34.506078  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:34.934452  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:34.934518  342550 machine.go:96] duration metric: took 5.067825426s to provisionDockerMachine
	I1026 08:43:34.934546  342550 start.go:293] postStartSetup for "ha-232402-m03" (driver="docker")
	I1026 08:43:34.934571  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:34.934666  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:34.934854  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.954917  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.082367  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:35.089885  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:43:35.090161  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:43:35.090176  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:43:35.090254  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:43:35.090369  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:43:35.090381  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:43:35.090546  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:35.101842  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:35.125627  342550 start.go:296] duration metric: took 191.050639ms for postStartSetup
	I1026 08:43:35.125778  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:43:35.125843  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.147102  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.264825  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:43:35.271672  342550 fix.go:56] duration metric: took 5.896121251s for fixHost
	I1026 08:43:35.271696  342550 start.go:83] releasing machines lock for "ha-232402-m03", held for 5.89617159s
	I1026 08:43:35.271770  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:35.297127  342550 out.go:179] * Found network options:
	I1026 08:43:35.302967  342550 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1026 08:43:35.306003  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306038  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306066  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306091  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:43:35.306177  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:35.306229  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.306517  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:35.306579  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.328577  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.334791  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.497414  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:35.553666  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:35.553760  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:35.566215  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:43:35.566249  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:43:35.566284  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:43:35.566344  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:35.592142  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:35.609686  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:35.609758  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:35.630610  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:35.655250  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:35.914838  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:36.134783  342550 docker.go:234] disabling docker service ...
	I1026 08:43:36.134897  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:36.155549  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:36.173043  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:36.485618  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:36.970002  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:37.017784  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:37.075903  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:43:37.075984  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.109912  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:37.110012  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.149021  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.175380  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.186219  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:43:37.221818  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.248314  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.265224  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.288935  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:43:37.303925  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
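Pod networking needs IPv4 forwarding, hence the explicit `echo 1 > /proc/sys/net/ipv4/ip_forward` before restarting CRI-O. The same toggle in Go (must run as root):

    package main

    import "os"

    func main() {
        // Equivalent of `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            panic(err)
        }
    }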
	I1026 08:43:37.319373  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:37.587508  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:45:07.934759  342550 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.347172836s)
	I1026 08:45:07.934786  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:45:07.934837  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:45:07.939538  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:45:07.939605  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:45:07.943575  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:45:07.968256  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:45:07.968338  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:07.998587  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:08.044252  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:45:08.047310  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:45:08.050469  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1026 08:45:08.053493  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:45:08.069256  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:45:08.074145  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:08.085961  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:45:08.086231  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:08.086536  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:45:08.111717  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:45:08.112059  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.4
	I1026 08:45:08.112073  342550 certs.go:195] generating shared ca certs ...
	I1026 08:45:08.112098  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:45:08.112222  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:45:08.112268  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:45:08.112279  342550 certs.go:257] generating profile certs ...
	I1026 08:45:08.112378  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:45:08.112451  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.aa893e80
	I1026 08:45:08.112494  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:45:08.112511  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:45:08.112532  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:45:08.112560  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:45:08.112589  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:45:08.112605  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:45:08.112627  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:45:08.112645  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:45:08.112660  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:45:08.112746  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:45:08.112782  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:45:08.112801  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:45:08.112842  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:45:08.112879  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:45:08.112910  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:45:08.112969  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:08.113008  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.113024  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.113046  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.113130  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:45:08.132367  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:45:08.231029  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 08:45:08.235028  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 08:45:08.244659  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 08:45:08.249599  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 08:45:08.261474  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 08:45:08.266790  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 08:45:08.276538  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 08:45:08.280256  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1026 08:45:08.289634  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 08:45:08.293405  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 08:45:08.301646  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 08:45:08.305975  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
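The stat / "scp ... --> memory" pairs above verify that each shared control-plane secret (service-account keypair, front-proxy CA, etcd CA) exists on the primary node and read it back into memory so it can be redistributed to the joining node. A minimal sketch of that read-back over SSH, assuming golang.org/x/crypto/ssh and reusing the address, port, user, and key path from the sshutil.go line above; the helper is illustrative, not minikube's actual ssh_runner:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// fetchRemoteFile reads a remote file into memory over SSH, roughly what
// the `scp /var/lib/minikube/certs/sa.pub --> memory` lines describe.
func fetchRemoteFile(client *ssh.Client, path string) ([]byte, error) {
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	// `sudo cat` keeps the sketch short; a real transfer would use scp/sftp.
	return sess.Output("sudo cat " + path)
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33180", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	data, err := fetchRemoteFile(client, "/var/lib/minikube/certs/sa.pub")
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched %d bytes\n", len(data))
}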
	I1026 08:45:08.315022  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:45:08.338065  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:45:08.356967  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:45:08.380657  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:45:08.402274  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:45:08.422301  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:45:08.441783  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:45:08.461742  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:45:08.481814  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:45:08.502025  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:45:08.521895  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:45:08.542103  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 08:45:08.555693  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 08:45:08.570653  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 08:45:08.588674  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1026 08:45:08.602475  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 08:45:08.616618  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 08:45:08.630309  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 08:45:08.645565  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:45:08.652358  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:45:08.661564  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.665847  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.665967  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.709135  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:45:08.717967  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:45:08.727059  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.731470  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.731567  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.774541  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:45:08.784749  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:45:08.793805  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.797757  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.797878  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.841551  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
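Each `openssl x509 -hash -noout` run above prints the certificate's subject-name hash, and the `ln -fs` that follows creates the <hash>.0 symlink that OpenSSL's CApath lookup expects (b5213941.0 here is minikubeCA's hash). The same rehash step as a small Go sketch; it shells out to openssl exactly as the log does, and rehash is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehash links certPath into certsDir under its OpenSSL subject hash,
// the same effect as the "ln -fs ... <hash>.0" commands in the log.
func rehash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}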
	I1026 08:45:08.850068  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:45:08.854034  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:45:08.895708  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:45:08.942061  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:45:08.984630  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:45:09.028757  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:45:09.071885  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
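`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how these lines decide whether a control-plane cert can be reused or must be regenerated. An equivalent check in pure Go with crypto/x509; expiresWithin is a hypothetical helper, not a minikube function:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}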
	I1026 08:45:09.113415  342550 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1026 08:45:09.113537  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
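The [Unit]/[Service] block above is the systemd drop-in kubeadm.go renders for the kubelet on the joining control plane; across nodes only --hostname-override and --node-ip change. A sketch of rendering such a drop-in with text/template, with the nodeParams struct and its field names invented for the example:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values substituted into the drop-in;
// the struct is hypothetical, not minikube's own type.
type nodeParams struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, nodeParams{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
		Hostname:    "ha-232402-m03",
		NodeIP:      "192.168.49.4",
	})
}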
	I1026 08:45:09.113588  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:45:09.113648  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:45:09.127980  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:45:09.128041  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
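Because the ip_vs modules were missing, kube-vip here falls back to ARP-based failover: vip_arp is enabled, leader election via the plndr-cp-lock lease picks one control plane to answer for the VIP 192.168.49.254 on eth0, and no IPVS load-balancing is configured. One quick way to sanity-check a generated static-pod manifest like this is to unmarshal it into a typed Pod object; a sketch assuming k8s.io/api and sigs.k8s.io/yaml are on the module path:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		fmt.Fprintln(os.Stderr, "manifest does not parse as a Pod:", err)
		return
	}
	// Spot-check the fields the cluster actually depends on.
	fmt.Println("image:", pod.Spec.Containers[0].Image)
	fmt.Println("hostNetwork:", pod.Spec.HostNetwork)
	for _, e := range pod.Spec.Containers[0].Env {
		if e.Name == "address" {
			fmt.Println("VIP:", e.Value)
		}
	}
}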
	I1026 08:45:09.128109  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:45:09.136574  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:45:09.136660  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 08:45:09.145279  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:45:09.159587  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:45:09.174486  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:45:09.192617  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:45:09.196600  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
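That one-liner is minikube's idempotent /etc/hosts update: drop any line already tagged control-plane.minikube.internal, append the current VIP mapping, and copy the temp file back over /etc/hosts in one step. The same logic in plain Go; upsertHost is a hypothetical helper mirroring the pipeline:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so exactly one line maps ip to name,
// mirroring the grep -v / echo / sudo cp pipeline in the log.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}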
	I1026 08:45:09.206757  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:09.371220  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:09.388111  342550 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:45:09.388597  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:09.391505  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:45:09.394393  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:09.549234  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:09.565513  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:45:09.565648  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:45:09.567107  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m03" to be "Ready" ...
	W1026 08:45:11.571085  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	W1026 08:45:13.571335  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	W1026 08:45:16.071949  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	I1026 08:45:16.573590  342550 node_ready.go:49] node "ha-232402-m03" is "Ready"
	I1026 08:45:16.573675  342550 node_ready.go:38] duration metric: took 7.00653579s for node "ha-232402-m03" to be "Ready" ...
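node_ready.go is polling the node's Ready condition through the apiserver until it flips from Unknown to True, with a 6-minute cap. Roughly the same wait implemented with client-go (a sketch; the kubeconfig is taken from $KUBECONFIG rather than minikube's internal client config, and waitNodeReady is a hypothetical helper):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's conditions until Ready is True, the same
// predicate the node_ready.go lines above are waiting on.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s retry spacing
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := waitNodeReady(cs, "ha-232402-m03", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}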
	I1026 08:45:16.573704  342550 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:45:16.573795  342550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:45:16.595522  342550 api_server.go:72] duration metric: took 7.20735956s to wait for apiserver process to appear ...
	I1026 08:45:16.595595  342550 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:45:16.595631  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:45:16.604035  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:45:16.604987  342550 api_server.go:141] control plane version: v1.34.1
	I1026 08:45:16.605006  342550 api_server.go:131] duration metric: took 9.390023ms to wait for apiserver health ...
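The healthz wait is simpler: keep GETting https://<apiserver>:8443/healthz until it returns 200 with the body "ok", as the api_server.go lines above show. A bare-bones version of that poll; InsecureSkipVerify stands in for loading the cluster CA and is acceptable only in a throwaway sketch like this:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver health endpoint until it answers "ok".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}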
	I1026 08:45:16.605015  342550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:45:16.613936  342550 system_pods.go:59] 26 kube-system pods found
	I1026 08:45:16.614018  342550 system_pods.go:61] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running
	I1026 08:45:16.614048  342550 system_pods.go:61] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running
	I1026 08:45:16.614068  342550 system_pods.go:61] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:45:16.614088  342550 system_pods.go:61] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:45:16.614126  342550 system_pods.go:61] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:45:16.614144  342550 system_pods.go:61] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:45:16.614163  342550 system_pods.go:61] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:45:16.614182  342550 system_pods.go:61] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:45:16.614216  342550 system_pods.go:61] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:45:16.614236  342550 system_pods.go:61] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running
	I1026 08:45:16.614257  342550 system_pods.go:61] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:45:16.614277  342550 system_pods.go:61] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:45:16.614312  342550 system_pods.go:61] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running
	I1026 08:45:16.614337  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:45:16.614368  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:45:16.614385  342550 system_pods.go:61] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:45:16.614414  342550 system_pods.go:61] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:45:16.614446  342550 system_pods.go:61] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:45:16.614463  342550 system_pods.go:61] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:45:16.614481  342550 system_pods.go:61] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running
	I1026 08:45:16.614508  342550 system_pods.go:61] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:45:16.614538  342550 system_pods.go:61] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:45:16.614557  342550 system_pods.go:61] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:45:16.614577  342550 system_pods.go:61] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:45:16.614614  342550 system_pods.go:61] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:45:16.614633  342550 system_pods.go:61] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:45:16.614654  342550 system_pods.go:74] duration metric: took 9.633315ms to wait for pod list to return data ...
	I1026 08:45:16.614688  342550 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:45:16.617833  342550 default_sa.go:45] found service account: "default"
	I1026 08:45:16.617904  342550 default_sa.go:55] duration metric: took 3.173782ms for default service account to be created ...
	I1026 08:45:16.617928  342550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:45:16.715675  342550 system_pods.go:86] 26 kube-system pods found
	I1026 08:45:16.715759  342550 system_pods.go:89] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running
	I1026 08:45:16.715782  342550 system_pods.go:89] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running
	I1026 08:45:16.715824  342550 system_pods.go:89] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:45:16.715843  342550 system_pods.go:89] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:45:16.715864  342550 system_pods.go:89] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:45:16.715937  342550 system_pods.go:89] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:45:16.715954  342550 system_pods.go:89] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:45:16.715984  342550 system_pods.go:89] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:45:16.716013  342550 system_pods.go:89] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:45:16.716032  342550 system_pods.go:89] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running
	I1026 08:45:16.716052  342550 system_pods.go:89] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:45:16.716092  342550 system_pods.go:89] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:45:16.716112  342550 system_pods.go:89] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running
	I1026 08:45:16.716133  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:45:16.716170  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:45:16.716191  342550 system_pods.go:89] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:45:16.716210  342550 system_pods.go:89] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:45:16.716229  342550 system_pods.go:89] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:45:16.716260  342550 system_pods.go:89] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:45:16.716280  342550 system_pods.go:89] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running
	I1026 08:45:16.716302  342550 system_pods.go:89] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:45:16.716341  342550 system_pods.go:89] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:45:16.716362  342550 system_pods.go:89] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:45:16.716380  342550 system_pods.go:89] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:45:16.716399  342550 system_pods.go:89] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:45:16.716435  342550 system_pods.go:89] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:45:16.716457  342550 system_pods.go:126] duration metric: took 98.51028ms to wait for k8s-apps to be running ...
	I1026 08:45:16.716492  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:45:16.716578  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:45:16.737535  342550 system_svc.go:56] duration metric: took 21.034459ms WaitForService to wait for kubelet
	I1026 08:45:16.737613  342550 kubeadm.go:586] duration metric: took 7.349454949s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:45:16.737646  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:45:16.742538  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742622  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742649  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742689  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742708  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742751  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742771  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742799  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742848  342550 node_conditions.go:105] duration metric: took 5.158408ms to run NodePressure ...
	I1026 08:45:16.742874  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:45:16.742923  342550 start.go:255] writing updated cluster config ...
	I1026 08:45:16.748453  342550 out.go:203] 
	I1026 08:45:16.751669  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:16.751857  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:16.755487  342550 out.go:179] * Starting "ha-232402-m04" worker node in "ha-232402" cluster
	I1026 08:45:16.760316  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:45:16.763382  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:45:16.766507  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:45:16.766623  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:45:16.766588  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:45:16.767053  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:45:16.767077  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:45:16.767235  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:16.789140  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:45:16.789160  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:45:16.789172  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:45:16.789196  342550 start.go:360] acquireMachinesLock for ha-232402-m04: {Name:mk15269e9a15e15636295a3a12cc05426ca8566d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:45:16.789248  342550 start.go:364] duration metric: took 36.217µs to acquireMachinesLock for "ha-232402-m04"
	I1026 08:45:16.789267  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:45:16.789272  342550 fix.go:54] fixHost starting: m04
	I1026 08:45:16.789524  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:45:16.816258  342550 fix.go:112] recreateIfNeeded on ha-232402-m04: state=Stopped err=<nil>
	W1026 08:45:16.816289  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:45:16.819903  342550 out.go:252] * Restarting existing docker container for "ha-232402-m04" ...
	I1026 08:45:16.820003  342550 cli_runner.go:164] Run: docker start ha-232402-m04
	I1026 08:45:17.136467  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:45:17.172522  342550 kic.go:430] container "ha-232402-m04" state is running.
	I1026 08:45:17.173106  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:17.210858  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:17.211110  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:45:17.212380  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:17.248960  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:17.249254  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:17.249263  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:45:17.250106  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43382->127.0.0.1:33195: read: connection reset by peer
	I1026 08:45:20.411022  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m04
	
	I1026 08:45:20.411054  342550 ubuntu.go:182] provisioning hostname "ha-232402-m04"
	I1026 08:45:20.411151  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:20.437224  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:20.437615  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:20.437634  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m04 && echo "ha-232402-m04" | sudo tee /etc/hostname
	I1026 08:45:20.606470  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m04
	
	I1026 08:45:20.606623  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:20.637294  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:20.637715  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:20.637737  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:45:20.795267  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:45:20.795294  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:45:20.795316  342550 ubuntu.go:190] setting up certificates
	I1026 08:45:20.795325  342550 provision.go:84] configureAuth start
	I1026 08:45:20.795388  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:20.814347  342550 provision.go:143] copyHostCerts
	I1026 08:45:20.814401  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:45:20.814441  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:45:20.814454  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:45:20.814537  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:45:20.814631  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:45:20.814656  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:45:20.814661  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:45:20.814687  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:45:20.814798  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:45:20.814828  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:45:20.814842  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:45:20.814869  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:45:20.814924  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m04 san=[127.0.0.1 192.168.49.5 ha-232402-m04 localhost minikube]
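The server cert generated here is signed by the machine CA and carries both IP and DNS SANs (127.0.0.1, 192.168.49.5, ha-232402-m04, localhost, minikube), so the Docker endpoint verifies under any name it is reached by. A compact sketch of issuing such a SAN-bearing server cert with crypto/x509; a throwaway CA is generated inline to keep the example self-contained, whereas minikube loads its existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// newServerCert issues a CA-signed server certificate carrying the same kind
// of SAN list as the provision.go line above. Hypothetical helper.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, ips []net.IP, dns []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 127.0.0.1 and 192.168.49.5
		DNSNames:     dns, // e.g. ha-232402-m04, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, &caTmpl, &caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	pemBytes, err := newServerCert(caCert, caKey, "jenkins.ha-232402-m04",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
		[]string{"ha-232402-m04", "localhost", "minikube"})
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(pemBytes)
}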
	I1026 08:45:21.016159  342550 provision.go:177] copyRemoteCerts
	I1026 08:45:21.016235  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:45:21.016281  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.041440  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.148014  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:45:21.148076  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:45:21.172598  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:45:21.172660  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:45:21.199069  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:45:21.199134  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:45:21.219310  342550 provision.go:87] duration metric: took 423.970968ms to configureAuth
	I1026 08:45:21.219338  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:45:21.219574  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:21.219685  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.244539  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:21.244932  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:21.244952  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:45:21.600980  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:45:21.601003  342550 machine.go:96] duration metric: took 4.389678213s to provisionDockerMachine
	I1026 08:45:21.601016  342550 start.go:293] postStartSetup for "ha-232402-m04" (driver="docker")
	I1026 08:45:21.601027  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:45:21.601089  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:45:21.601135  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.623066  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.735340  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:45:21.738667  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:45:21.738698  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:45:21.738751  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:45:21.738812  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:45:21.738908  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:45:21.738919  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:45:21.739032  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:45:21.746960  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:21.766329  342550 start.go:296] duration metric: took 165.296455ms for postStartSetup
	I1026 08:45:21.766414  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:45:21.766453  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.787386  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.899980  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:45:21.907890  342550 fix.go:56] duration metric: took 5.118610324s for fixHost
	I1026 08:45:21.907917  342550 start.go:83] releasing machines lock for "ha-232402-m04", held for 5.118661688s
	I1026 08:45:21.907988  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:21.933326  342550 out.go:179] * Found network options:
	I1026 08:45:21.936320  342550 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1026 08:45:21.940256  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940294  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940306  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940340  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940357  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940368  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:45:21.940442  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:45:21.940486  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.940766  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:45:21.940826  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.972410  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.978079  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:22.149485  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:45:22.200194  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:45:22.200337  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:45:22.209074  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:45:22.209098  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:45:22.209131  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:45:22.209180  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:45:22.227970  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:45:22.260018  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:45:22.260091  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:45:22.280501  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:45:22.296013  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:45:22.435097  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:45:22.583385  342550 docker.go:234] disabling docker service ...
	I1026 08:45:22.583454  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:45:22.599821  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:45:22.618049  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:45:22.760465  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:45:22.913374  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:45:22.930530  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:45:22.946115  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:45:22.946198  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.955712  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:45:22.955791  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.967161  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.978701  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.988107  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:45:22.999250  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.011010  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.021614  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
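Taken together, those sed edits leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This fragment is reconstructed from the commands above, not copied from the node, and the section headers are the standard CRI-O ones rather than anything visible in the log:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]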
	I1026 08:45:23.033901  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:45:23.047274  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:45:23.055227  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:23.187258  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:45:23.348936  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:45:23.349088  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:45:23.353170  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:45:23.353242  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:45:23.356804  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:45:23.401811  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:45:23.401919  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:23.436307  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:23.473208  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:45:23.476075  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:45:23.478893  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1026 08:45:23.481820  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1026 08:45:23.484818  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:45:23.504854  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:45:23.509411  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:23.519797  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:45:23.520052  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:23.520336  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:45:23.539958  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:45:23.540265  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.5
	I1026 08:45:23.540275  342550 certs.go:195] generating shared ca certs ...
	I1026 08:45:23.540293  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:45:23.540418  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:45:23.540465  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:45:23.540482  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:45:23.540497  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:45:23.540515  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:45:23.540528  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:45:23.540600  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:45:23.540638  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:45:23.540660  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:45:23.540691  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:45:23.540724  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:45:23.540753  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:45:23.540804  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:23.540835  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.540850  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.540862  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.540886  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:45:23.560629  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:45:23.585421  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:45:23.605705  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:45:23.632934  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:45:23.654288  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:45:23.674771  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:45:23.693831  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:45:23.700411  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:45:23.709558  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.716080  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.716173  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.758415  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:45:23.767708  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:45:23.779057  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.784321  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.784454  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.831578  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:45:23.841350  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:45:23.850606  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.854695  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.854826  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.898173  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:45:23.906572  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:45:23.910323  342550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:45:23.910364  342550 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1026 08:45:23.910446  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:45:23.910505  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:45:23.920573  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:45:23.920679  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1026 08:45:23.932673  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:45:23.947328  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
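
The two "scp memory" transfers stream generated unit text straight to the node: kubelet.service plus the 10-kubeadm.conf drop-in whose contents were logged just above. The bare "ExecStart=" line in that drop-in is deliberate; a systemd drop-in must clear the previous ExecStart before it may redefine it. A sketch of rendering such a drop-in, with the node values hardcoded for illustration and several flags omitted (minikube derives all of this from the cluster config):

    // dropin.go - sketch: render a kubelet systemd drop-in like the one above.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        unit := fmt.Sprintf(
            "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/%s/kubelet --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n\n[Install]\n",
            "v1.34.1", "ha-232402-m04", "192.168.49.5")
        // minikube pipes this buffer over SSH ("scp memory"); here it lands in the CWD.
        if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
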
	I1026 08:45:23.969163  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:45:23.973466  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
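
The one-liner above is an idempotent upsert on /etc/hosts: filter out any existing control-plane.minikube.internal entry, append the current VIP mapping, stage the result in /tmp/h.$$ (the shell PID keeps the name unique), and copy it back with sudo. Copying in place rather than renaming matters here: /etc/hosts inside a container is typically a bind mount, and a rename would replace the mount point. A native sketch of the same upsert, assuming direct file access:

    // hostsupsert.go - sketch: the idempotent /etc/hosts upsert from the log,
    // done natively instead of via the shell pipeline.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            f := strings.Fields(line)
            if len(f) >= 2 && f[len(f)-1] == host {
                continue // drop the old mapping, whatever IP it pointed at
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        // Written in place (like the sudo cp in the log): a bind-mounted
        // /etc/hosts would be severed by an atomic rename.
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
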
	I1026 08:45:23.984606  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:24.155134  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:24.171153  342550 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1026 08:45:24.171549  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:24.174346  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:45:24.177303  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:24.343470  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:24.368034  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:45:24.368111  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
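
The dumped client config is plain certificate authentication: CertFile/KeyFile identify the client, CAFile pins the server, and the stale HA VIP host is swapped for a direct control-plane address before use (rest.sanitizedTLSClientConfig is just the redacted String() form of rest.TLSClientConfig). A minimal client-go sketch of the same shape; host and file paths are this run's values, shown purely for illustration:

    // kapiclient.go - sketch: a cert-authenticated client like the rest.Config above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443", // direct control plane, not the stale HA VIP
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("server:", v.GitVersion)
    }
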
	I1026 08:45:24.368387  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m04" to be "Ready" ...
	I1026 08:45:25.872447  342550 node_ready.go:49] node "ha-232402-m04" is "Ready"
	I1026 08:45:25.872476  342550 node_ready.go:38] duration metric: took 1.504072228s for node "ha-232402-m04" to be "Ready" ...
	I1026 08:45:25.872489  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:45:25.872631  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:45:25.886146  342550 system_svc.go:56] duration metric: took 13.648567ms WaitForService to wait for kubelet
	I1026 08:45:25.886178  342550 kubeadm.go:586] duration metric: took 1.714983841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:45:25.886197  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:45:25.890052  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890084  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890096  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890101  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890106  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890116  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890120  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890125  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890130  342550 node_conditions.go:105] duration metric: took 3.927915ms to run NodePressure ...
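
The NodePressure pass reads each node's advertised capacity, which is why the same pair of figures repeats four times above: all four ha-232402 nodes report 2 CPUs and 203034800Ki of ephemeral storage. A sketch of the underlying reads, assuming KUBECONFIG points at the cluster:

    // nodecap.go - sketch: the per-node capacity reads behind node_conditions.
    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        nodes, err := kubernetes.NewForConfigOrDie(cfg).CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
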
	I1026 08:45:25.890147  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:45:25.890180  342550 start.go:255] writing updated cluster config ...
	I1026 08:45:25.890539  342550 ssh_runner.go:195] Run: rm -f paused
	I1026 08:45:25.897547  342550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:45:25.898046  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:45:25.914674  342550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d4htv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.921403  342550 pod_ready.go:94] pod "coredns-66bc5c9577-d4htv" is "Ready"
	I1026 08:45:25.921528  342550 pod_ready.go:86] duration metric: took 6.710293ms for pod "coredns-66bc5c9577-d4htv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.921572  342550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vctcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.928323  342550 pod_ready.go:94] pod "coredns-66bc5c9577-vctcf" is "Ready"
	I1026 08:45:25.928388  342550 pod_ready.go:86] duration metric: took 6.794421ms for pod "coredns-66bc5c9577-vctcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.931541  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.938566  342550 pod_ready.go:94] pod "etcd-ha-232402" is "Ready"
	I1026 08:45:25.938593  342550 pod_ready.go:86] duration metric: took 7.022993ms for pod "etcd-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.938603  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.944339  342550 pod_ready.go:94] pod "etcd-ha-232402-m02" is "Ready"
	I1026 08:45:25.944373  342550 pod_ready.go:86] duration metric: took 5.762714ms for pod "etcd-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.944383  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:26.098602  342550 request.go:683] "Waited before sending request" delay="154.1318ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-232402-m03"
	I1026 08:45:26.299278  342550 request.go:683] "Waited before sending request" delay="197.131159ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:26.498654  342550 request.go:683] "Waited before sending request" delay="53.17348ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-232402-m03"
	I1026 08:45:26.699396  342550 request.go:683] "Waited before sending request" delay="197.322103ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:27.099200  342550 request.go:683] "Waited before sending request" delay="150.305147ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
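
The "Waited before sending request" messages come from client-go's own token-bucket limiter, not from server-side priority and fairness: with QPS and Burst left at 0 in the config dumped earlier, the client falls back to its defaults (5 requests/s, burst 10) and spaces out these GETs. A sketch of raising the limits through the standard rest.Config fields, values illustrative:

    // ratelimit.go - sketch: the client-side QPS/Burst knobs behind the throttle messages.
    package main

    import (
        "os"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient bumps the limiter; left at zero, client-go defaults to 5 QPS / burst 10.
    func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50 // illustrative values
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        if _, err := newFastClient(cfg); err != nil {
            panic(err)
        }
    }
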
	W1026 08:45:27.952681  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:30.450341  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:32.451378  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:34.951997  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:36.952338  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:38.952753  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:41.452152  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:43.951084  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:45.956575  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:48.451391  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:50.451685  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:52.950573  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:54.951442  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	I1026 08:45:56.952674  342550 pod_ready.go:94] pod "etcd-ha-232402-m03" is "Ready"
	I1026 08:45:56.952698  342550 pod_ready.go:86] duration metric: took 31.008309673s for pod "etcd-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
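
pod_ready counts a pod as "Ready" once its PodReady condition reports True; the 31s above is that condition flipping after etcd on m03 rejoined the cluster. A sketch of the condition test, assuming KUBECONFIG points at the cluster and reusing this run's pod name:

    // podready.go - sketch of the PodReady-condition test behind the pod_ready lines.
    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // condition not reported yet
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        ready, err := podIsReady(kubernetes.NewForConfigOrDie(cfg), "kube-system", "etcd-ha-232402-m03")
        fmt.Println(ready, err)
    }
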
	I1026 08:45:56.957384  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.966004  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402" is "Ready"
	I1026 08:45:56.966072  342550 pod_ready.go:86] duration metric: took 8.662888ms for pod "kube-apiserver-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.966104  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.973739  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402-m02" is "Ready"
	I1026 08:45:56.973764  342550 pod_ready.go:86] duration metric: took 7.640413ms for pod "kube-apiserver-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.973773  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.981079  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402-m03" is "Ready"
	I1026 08:45:56.981103  342550 pod_ready.go:86] duration metric: took 7.323871ms for pod "kube-apiserver-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.985549  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.145955  342550 request.go:683] "Waited before sending request" delay="160.263354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402"
	I1026 08:45:57.345448  342550 request.go:683] "Waited before sending request" delay="176.112448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402"
	I1026 08:45:57.350017  342550 pod_ready.go:94] pod "kube-controller-manager-ha-232402" is "Ready"
	I1026 08:45:57.350048  342550 pod_ready.go:86] duration metric: took 364.42267ms for pod "kube-controller-manager-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.350058  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.545478  342550 request.go:683] "Waited before sending request" delay="195.318809ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m02"
	I1026 08:45:57.746036  342550 request.go:683] "Waited before sending request" delay="196.306126ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m02"
	I1026 08:45:57.749268  342550 pod_ready.go:94] pod "kube-controller-manager-ha-232402-m02" is "Ready"
	I1026 08:45:57.749295  342550 pod_ready.go:86] duration metric: took 399.228382ms for pod "kube-controller-manager-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.749305  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.945742  342550 request.go:683] "Waited before sending request" delay="196.324022ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m03"
	I1026 08:45:58.145179  342550 request.go:683] "Waited before sending request" delay="195.240885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:58.346153  342550 request.go:683] "Waited before sending request" delay="96.402716ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m03"
	I1026 08:45:58.545837  342550 request.go:683] "Waited before sending request" delay="196.140702ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:58.946129  342550 request.go:683] "Waited before sending request" delay="192.251793ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:59.345416  342550 request.go:683] "Waited before sending request" delay="92.227487ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	W1026 08:45:59.755924  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:01.756440  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:03.756734  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:06.263222  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:08.756233  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:10.759737  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:13.262615  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:15.263768  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:17.761879  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:20.266086  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:22.755536  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:24.756371  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:27.265289  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:29.756416  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:32.261261  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:34.278965  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:36.756714  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:39.255754  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:41.260562  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:43.263679  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:45.756223  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:47.762407  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:50.257781  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:52.261309  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:54.266882  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:56.756901  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:59.265385  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:01.266136  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:03.755443  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:05.755740  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:07.757293  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:10.261769  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:12.263710  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:14.265412  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:16.757171  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:19.259551  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:21.267428  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:23.756993  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:26.257777  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:28.263354  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:30.757700  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:33.260488  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:35.261687  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:37.266110  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:39.756367  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:41.759474  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:44.258485  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:46.259773  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:48.269451  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:50.756558  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:53.259529  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:55.261684  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:57.264250  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:59.268741  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:01.756567  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:04.263036  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:06.758354  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:09.263247  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:11.263720  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:13.759643  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:16.263136  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:18.762943  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:21.264362  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:23.756305  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:26.262469  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:28.265988  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:30.756780  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:33.263227  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:35.756771  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:37.759839  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:40.258685  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:42.265762  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:44.756117  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:46.757380  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:49.258967  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:51.259693  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:53.265397  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:55.755930  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:57.758343  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:00.294615  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:02.756143  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:05.263920  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:07.756279  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:09.757243  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:12.261285  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:14.756800  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:16.756845  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:19.265272  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:21.756019  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:23.756649  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:25.756946  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	I1026 08:49:25.898241  342550 pod_ready.go:86] duration metric: took 3m28.14891381s for pod "kube-controller-manager-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:49:25.898285  342550 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1026 08:49:25.898319  342550 pod_ready.go:40] duration metric: took 4m0.000740057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:49:25.901244  342550 out.go:203] 
	W1026 08:49:25.904226  342550 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1026 08:49:25.907092  342550 out.go:203] 

                                                
                                                
** /stderr **
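
The run fails by timeout rather than by crash: kube-controller-manager-ha-232402-m03 never turned Ready inside the 4m0s extra-wait budget, the surrounding context hit its deadline, and start exits with GUEST_START (the exit status 80 below). The shape is an ordinary context-bounded poll; a sketch with an illustrative checkReady stand-in and a shortened timeout:

    // waitdeadline.go - sketch: a context-bounded readiness poll of the kind that
    // timed out above. checkReady is an illustrative stand-in for the pod check.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func waitReady(ctx context.Context, checkReady func() bool) error {
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            if checkReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("extra waiting: %w", ctx.Err()) // -> "context deadline exceeded"
            case <-tick.C:
            }
        }
    }

    func main() {
        // The log used a 4m0s budget; shortened here so the demo exits quickly.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        err := waitReady(ctx, func() bool { return false }) // never Ready, as in this run
        fmt.Println(err, errors.Is(err, context.DeadlineExceeded))
    }
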
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-232402 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-232402
helpers_test.go:243: (dbg) docker inspect ha-232402:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7",
	        "Created": "2025-10-26T08:34:55.36697254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342678,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:42:49.204063246Z",
	            "FinishedAt": "2025-10-26T08:42:48.58778224Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/hosts",
	        "LogPath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7-json.log",
	        "Name": "/ha-232402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-232402:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-232402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7",
	                "LowerDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-232402",
	                "Source": "/var/lib/docker/volumes/ha-232402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-232402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-232402",
	                "name.minikube.sigs.k8s.io": "ha-232402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11edc866359e31e24dcf48093b22c87ec8e166cbe22464af2be8dced4da00649",
	            "SandboxKey": "/var/run/docker/netns/11edc866359e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-232402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:6e:fd:3d:05:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "256d72a548e060b98ca9fad9f40f3f0a50de572a247e0c2982ac187e2f8a5408",
	                    "EndpointID": "ba20b0b86725488764c95d576ab973385f59579e0c1710b1b409044428d2982b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-232402",
	                        "601e5c9ab7d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
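
The post-mortem captures the full docker inspect JSON; when a single field is wanted, such as the host port published for 8443/tcp (33183 above), a Go template keyed on the same structure is more direct. A sketch shelling out the way the harness does, using this run's container name:

    // inspectport.go - sketch: pull one field from docker inspect with a template,
    // keyed on the NetworkSettings.Ports block shown above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        p, err := hostPort("ha-232402", "8443/tcp")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(p) // "33183" in the inspect output above
    }
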
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-232402 -n ha-232402
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 logs -n 25: (1.759298315s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt ha-232402-m02:/home/docker/cp-test_ha-232402-m03_ha-232402-m02.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m02 sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402-m02.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt ha-232402-m04:/home/docker/cp-test_ha-232402-m03_ha-232402-m04.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402-m04.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp testdata/cp-test.txt ha-232402-m04:/home/docker/cp-test.txt                                                             │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1668130144/001/cp-test_ha-232402-m04.txt │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402:/home/docker/cp-test_ha-232402-m04_ha-232402.txt                       │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402 sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402.txt                                                 │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402-m02:/home/docker/cp-test_ha-232402-m04_ha-232402-m02.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m02 sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402-m02.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402-m03:/home/docker/cp-test_ha-232402-m04_ha-232402-m03.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m03 sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402-m03.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ node    │ ha-232402 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ node    │ ha-232402 node start m02 --alsologtostderr -v 5                                                                                      │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:42 UTC │
	│ node    │ ha-232402 node list --alsologtostderr -v 5                                                                                           │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:42 UTC │                     │
	│ stop    │ ha-232402 stop --alsologtostderr -v 5                                                                                                │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:42 UTC │ 26 Oct 25 08:42 UTC │
	│ start   │ ha-232402 start --wait true --alsologtostderr -v 5                                                                                   │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:42 UTC │                     │
	│ node    │ ha-232402 node list --alsologtostderr -v 5                                                                                           │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:42:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:42:48.917934  342550 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:42:48.918170  342550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:42:48.918204  342550 out.go:374] Setting ErrFile to fd 2...
	I1026 08:42:48.918225  342550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:42:48.918525  342550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:42:48.918983  342550 out.go:368] Setting JSON to false
	I1026 08:42:48.919916  342550 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8719,"bootTime":1761459450,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:42:48.920018  342550 start.go:141] virtualization:  
	I1026 08:42:48.923144  342550 out.go:179] * [ha-232402] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:42:48.927011  342550 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:42:48.927093  342550 notify.go:220] Checking for updates...
	I1026 08:42:48.933001  342550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:42:48.935959  342550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:48.939045  342550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:42:48.941971  342550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:42:48.944900  342550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:42:48.948888  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:48.948992  342550 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:42:48.982651  342550 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:42:48.982836  342550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:42:49.052116  342550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-26 08:42:49.041304773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:42:49.052225  342550 docker.go:318] overlay module found
	I1026 08:42:49.055376  342550 out.go:179] * Using the docker driver based on existing profile
	I1026 08:42:49.058272  342550 start.go:305] selected driver: docker
	I1026 08:42:49.058291  342550 start.go:925] validating driver "docker" against &{Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:49.058453  342550 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:42:49.058555  342550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:42:49.113827  342550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-26 08:42:49.10402828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
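
For readers tracing this start-up path: the docker health probe above is a single `docker system info --format "{{json .}}"` call whose JSON output is decoded into the structure printed by info.go. A minimal, self-contained sketch of the same probe; the struct below keeps only a few of the fields visible in the dump and is an illustration, not minikube's actual type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // A small subset of the fields visible in the log dump above.
    type dockerInfo struct {
    	NCPU          int    `json:"NCPU"`
    	MemTotal      int64  `json:"MemTotal"`
    	ServerVersion string `json:"ServerVersion"`
    	CgroupDriver  string `json:"CgroupDriver"`
    	OSType        string `json:"OSType"`
    	Architecture  string `json:"Architecture"`
    }

    func main() {
    	// Same invocation as the cli_runner line above.
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d CPUs, %d bytes RAM, docker %s, cgroup driver %s (%s/%s)\n",
    		info.NCPU, info.MemTotal, info.ServerVersion, info.CgroupDriver, info.OSType, info.Architecture)
    }
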
	I1026 08:42:49.114262  342550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:42:49.114296  342550 cni.go:84] Creating CNI manager for ""
	I1026 08:42:49.114371  342550 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 08:42:49.114423  342550 start.go:349] cluster config:
	{Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:49.119386  342550 out.go:179] * Starting "ha-232402" primary control-plane node in "ha-232402" cluster
	I1026 08:42:49.122223  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:42:49.125135  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:42:49.127883  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:49.127936  342550 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 08:42:49.127964  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:42:49.127976  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:42:49.128054  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:42:49.128065  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:42:49.128205  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:49.148213  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:42:49.148234  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:42:49.148247  342550 cache.go:232] Successfully downloaded all kic artifacts
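
The image.go lines above amount to an existence probe against the local daemon: if `docker image inspect` succeeds for the kicbase reference, both the pull and the load are skipped. A sketch of that check (the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether ref exists in the local docker daemon.
    // docker image inspect exits non-zero when the image is absent.
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
    	if imageInDaemon(ref) {
    		fmt.Println("exists in daemon, skipping load")
    	} else {
    		fmt.Println("not found, would pull")
    	}
    }
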
	I1026 08:42:49.148277  342550 start.go:360] acquireMachinesLock for ha-232402: {Name:mkd235a265416fa355dec74b5ac56d04d491256e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:42:49.148333  342550 start.go:364] duration metric: took 39.081µs to acquireMachinesLock for "ha-232402"
	I1026 08:42:49.148353  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:42:49.148358  342550 fix.go:54] fixHost starting: 
	I1026 08:42:49.148604  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:42:49.166112  342550 fix.go:112] recreateIfNeeded on ha-232402: state=Stopped err=<nil>
	W1026 08:42:49.166154  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:42:49.169342  342550 out.go:252] * Restarting existing docker container for "ha-232402" ...
	I1026 08:42:49.169424  342550 cli_runner.go:164] Run: docker start ha-232402
	I1026 08:42:49.418525  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:42:49.441545  342550 kic.go:430] container "ha-232402" state is running.
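
The fix.go/kic.go sequence above is inspect, `docker start`, then inspect again until the container reports running. A condensed sketch of that loop; the 30-second budget is illustrative, not minikube's actual timeout:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // state returns the container's current docker state, e.g. "running" or "exited".
    func state(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const name = "ha-232402"
    	if s, _ := state(name); s != "running" {
    		if err := exec.Command("docker", "start", name).Run(); err != nil {
    			panic(err)
    		}
    	}
    	// Poll until the container reports "running".
    	deadline := time.Now().Add(30 * time.Second)
    	for time.Now().Before(deadline) {
    		if s, err := state(name); err == nil && s == "running" {
    			fmt.Println("container", name, "state is running")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("timed out waiting for container")
    }
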
	I1026 08:42:49.441931  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:49.465537  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:49.465781  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:42:49.465856  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:49.483751  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:49.484066  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:49.484076  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:42:49.484629  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55242->127.0.0.1:33180: read: connection reset by peer
	I1026 08:42:52.642170  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402
	
	I1026 08:42:52.642200  342550 ubuntu.go:182] provisioning hostname "ha-232402"
	I1026 08:42:52.642273  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:52.660229  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:52.660550  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:52.660567  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402 && echo "ha-232402" | sudo tee /etc/hostname
	I1026 08:42:52.820313  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402
	
	I1026 08:42:52.820402  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:52.840800  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:52.841134  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:52.841160  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:42:52.990861  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
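
All of the provisioning commands in this section run over the container's forwarded SSH port (127.0.0.1:33180 here, user docker, the machine's id_rsa key). A minimal sketch of that transport using golang.org/x/crypto/ssh; the key path is abbreviated, and the host-key check is skipped only because the target is a throwaway local container:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-232402/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33180", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// One of the provisioning commands from the log above.
    	out, err := sess.CombinedOutput(`sudo hostname ha-232402 && echo "ha-232402" | sudo tee /etc/hostname`)
    	fmt.Printf("%s err=%v\n", out, err)
    }
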
	I1026 08:42:52.990892  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:42:52.990914  342550 ubuntu.go:190] setting up certificates
	I1026 08:42:52.990924  342550 provision.go:84] configureAuth start
	I1026 08:42:52.990990  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:53.009824  342550 provision.go:143] copyHostCerts
	I1026 08:42:53.009871  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:42:53.009906  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:42:53.009927  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:42:53.010020  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:42:53.010118  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:42:53.010140  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:42:53.010145  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:42:53.010179  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:42:53.010234  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:42:53.010255  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:42:53.010265  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:42:53.010300  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:42:53.010365  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402 san=[127.0.0.1 192.168.49.2 ha-232402 localhost minikube]
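
The server certificate generated here must carry every name a client might dial, hence the SAN set [127.0.0.1 192.168.49.2 ha-232402 localhost minikube]. A self-contained crypto/x509 sketch of the same idea; it self-signs for brevity, whereas minikube signs with the machine CA (ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-232402"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision.go line above.
    		DNSNames:    []string{"ha-232402", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
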
	I1026 08:42:54.039767  342550 provision.go:177] copyRemoteCerts
	I1026 08:42:54.039841  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:42:54.039881  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.058074  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.162887  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:42:54.162960  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1026 08:42:54.182166  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:42:54.182225  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:42:54.200141  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:42:54.200208  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:42:54.218057  342550 provision.go:87] duration metric: took 1.227107421s to configureAuth
	I1026 08:42:54.218140  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:42:54.218410  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:54.218534  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.236086  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:54.236409  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:54.236427  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:42:54.568914  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:42:54.568937  342550 machine.go:96] duration metric: took 5.103139338s to provisionDockerMachine
	I1026 08:42:54.568948  342550 start.go:293] postStartSetup for "ha-232402" (driver="docker")
	I1026 08:42:54.568959  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:42:54.569025  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:42:54.569071  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.593317  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.698695  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:42:54.702088  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:42:54.702117  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:42:54.702129  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:42:54.702512  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:42:54.702614  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:42:54.702623  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:42:54.702789  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:42:54.713617  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:42:54.730927  342550 start.go:296] duration metric: took 161.96257ms for postStartSetup
	I1026 08:42:54.731067  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:42:54.731128  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.748393  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.851766  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:42:54.857035  342550 fix.go:56] duration metric: took 5.708668211s for fixHost
	I1026 08:42:54.857061  342550 start.go:83] releasing machines lock for "ha-232402", held for 5.708719658s
	I1026 08:42:54.857136  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:54.874075  342550 ssh_runner.go:195] Run: cat /version.json
	I1026 08:42:54.874138  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.874395  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:42:54.874465  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.896310  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.897209  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:55.096305  342550 ssh_runner.go:195] Run: systemctl --version
	I1026 08:42:55.103174  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:42:55.140113  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:42:55.144490  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:42:55.144568  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:42:55.152609  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:42:55.152677  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:42:55.152720  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:42:55.152774  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:42:55.168885  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:42:55.183022  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:42:55.183092  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:42:55.199361  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:42:55.212983  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:42:55.329311  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:42:55.439788  342550 docker.go:234] disabling docker service ...
	I1026 08:42:55.439882  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:42:55.455129  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:42:55.468360  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:42:55.591545  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:42:55.712355  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
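
Because the kicbase image ships docker, containerd, and cri-dockerd alongside cri-o, the runtimes not in use are stopped, disabled, and masked; masking is what keeps socket activation from resurrecting them. A sketch of the pattern (the exact unit/verb pairs in the log differ slightly; errors are tolerated because a unit may already be inactive or absent):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	units := []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"}
    	for _, u := range units {
    		for _, verb := range []string{"stop", "disable", "mask"} {
    			// Failures are logged, not fatal, mirroring the best-effort
    			// sequence of systemctl calls in the log above.
    			if err := exec.Command("sudo", "systemctl", verb, u).Run(); err != nil {
    				fmt.Printf("systemctl %s %s: %v (ignored)\n", verb, u, err)
    			}
    		}
    	}
    }
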
	I1026 08:42:55.725339  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:42:55.739516  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:42:55.739619  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.748984  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:42:55.749080  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.758145  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.767369  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.776548  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:42:55.784814  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.794122  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.802447  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.811302  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:42:55.818789  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:42:55.826164  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:42:55.945131  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
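
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, re-add conmon_cgroup, and open unprivileged ports via default_sysctls, then daemon-reload and restart crio. The same kind of in-place edit expressed with Go regexps (two of the substitutions shown; the delete-then-append of conmon_cgroup is collapsed into a single replacement):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// Pin the pause image, as in the first sed above.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Force cgroupfs and re-add conmon_cgroup in one pass.
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }
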
	I1026 08:42:56.073628  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:42:56.073791  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:42:56.077812  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:42:56.077890  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:42:56.081474  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:42:56.106451  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:42:56.106572  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:42:56.135851  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:42:56.170040  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:42:56.172899  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:42:56.189266  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:42:56.192940  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
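
The bash one-liner above is an idempotent hosts-file update: filter out any previous host.minikube.internal mapping, append the fresh one, and copy the result back over /etc/hosts. The same logic in Go (path and entry taken from the log):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.49.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping for the same name, keep everything else.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
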
	I1026 08:42:56.202818  342550 kubeadm.go:883] updating cluster {Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:42:56.202967  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:56.203031  342550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:42:56.242649  342550 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:42:56.242675  342550 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:42:56.242785  342550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:42:56.267929  342550 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:42:56.267952  342550 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:42:56.267962  342550 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 08:42:56.268090  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:42:56.268186  342550 ssh_runner.go:195] Run: crio config
	I1026 08:42:56.329063  342550 cni.go:84] Creating CNI manager for ""
	I1026 08:42:56.329091  342550 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 08:42:56.329119  342550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:42:56.329143  342550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-232402 NodeName:ha-232402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:42:56.329378  342550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-232402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:42:56.329404  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:42:56.329467  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:42:56.341574  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
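
kube-vip can load-balance the control plane over IPVS, but only when ip_vs kernel modules are available; since `lsmod | grep ip_vs` finds none here, the generated manifest below advertises the VIP 192.168.49.254 by ARP only (note there is no load-balancing env entry in it). A sketch of that capability probe:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ipvsAvailable reports whether any ip_vs module is currently loaded.
    func ipvsAvailable() bool {
    	out, err := exec.Command("lsmod").Output()
    	if err != nil {
    		return false
    	}
    	for _, line := range strings.Split(string(out), "\n") {
    		if strings.HasPrefix(line, "ip_vs") {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	if ipvsAvailable() {
    		fmt.Println("enable control-plane load-balancing")
    	} else {
    		fmt.Println("ARP mode only: ip_vs modules not loaded")
    	}
    }
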
	I1026 08:42:56.341697  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1026 08:42:56.341768  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:42:56.350317  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:42:56.350440  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 08:42:56.358169  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1026 08:42:56.371463  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:42:56.384425  342550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1026 08:42:56.397225  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:42:56.410169  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:42:56.413685  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:42:56.423463  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:42:56.541144  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:42:56.557207  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.2
	I1026 08:42:56.557272  342550 certs.go:195] generating shared ca certs ...
	I1026 08:42:56.557303  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:56.557467  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:42:56.557541  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:42:56.557576  342550 certs.go:257] generating profile certs ...
	I1026 08:42:56.557692  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:42:56.557760  342550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea
	I1026 08:42:56.557782  342550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1026 08:42:57.202922  342550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea ...
	I1026 08:42:57.202955  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea: {Name:mk933c6500306ddc2c8fa2cedfd5052423ec2536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:57.203128  342550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea ...
	I1026 08:42:57.203144  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea: {Name:mkf5c2bd5c725d62808b0af7cfa80f3d97af9f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:57.203241  342550 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt
	I1026 08:42:57.204200  342550 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key
	I1026 08:42:57.204356  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:42:57.204376  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:42:57.204394  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:42:57.204414  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:42:57.204432  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:42:57.204452  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:42:57.204471  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:42:57.204482  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:42:57.204496  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:42:57.204543  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:42:57.204577  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:42:57.204589  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:42:57.204613  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:42:57.204639  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:42:57.204664  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:42:57.204710  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:42:57.204740  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.204757  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.204770  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.205388  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:42:57.231752  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:42:57.264536  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:42:57.295902  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:42:57.324874  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:42:57.356420  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:42:57.393782  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:42:57.430094  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:42:57.476853  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:42:57.514216  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:42:57.542038  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:42:57.573718  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:42:57.596671  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:42:57.604302  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:42:57.620193  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.624096  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.624163  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.684171  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:42:57.692726  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:42:57.703409  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.709875  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.709939  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.761720  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:42:57.770155  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:42:57.782379  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.786510  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.786589  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.842092  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
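
The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: TLS libraries resolve a CA in /etc/ssl/certs via a <subject-hash>.0 symlink, so each installed PEM gets one. The same step in Go, shown for the minikubeCA case:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCA creates the /etc/ssl/certs/<subject-hash>.0 symlink for a PEM cert.
    func linkCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA above
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // replace any stale link, mirroring ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println("link failed:", err)
    	}
    }
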
	I1026 08:42:57.850459  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:42:57.854127  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:42:57.922143  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:42:57.991084  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:42:58.032484  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:42:58.075471  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:42:58.119880  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
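
Each `-checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours; all six checks passing is what lets the restart reuse the existing control-plane certs instead of regenerating them. The equivalent check with crypto/x509:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("needs regeneration:", soon)
    }
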
	I1026 08:42:58.162522  342550 kubeadm.go:400] StartCluster: {Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:58.162655  342550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:42:58.162737  342550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:42:58.219487  342550 cri.go:89] found id: "b61c82cad7fbfa81b5335ff117e6fd6ed77be750be18b2795baad05c04597be3"
	I1026 08:42:58.219510  342550 cri.go:89] found id: "1c8917dd6e25dfe8420b3a3b324ba48edc068e4197ed8c758044d6818d9f3ba7"
	I1026 08:42:58.219516  342550 cri.go:89] found id: "7a416fdc86cf67bda0bfabac32d527db13c8586bd8ae683896061d13e70b3bf2"
	I1026 08:42:58.219520  342550 cri.go:89] found id: "f20afdb6dc9568c5fef5900fd16550aaeceaace97af19ff784772913a96da43b"
	I1026 08:42:58.219523  342550 cri.go:89] found id: "1902c617979ded8ef7430e8c9f9735ce1b420b6259bcc8d54001ef6f37f1fd3f"
	I1026 08:42:58.219526  342550 cri.go:89] found id: ""
	I1026 08:42:58.219576  342550 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:42:58.231211  342550 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:42:58Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:42:58.231293  342550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:42:58.239815  342550 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:42:58.239836  342550 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:42:58.239895  342550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:42:58.252247  342550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:42:58.252648  342550 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-232402" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:58.252758  342550 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "ha-232402" cluster setting kubeconfig missing "ha-232402" context setting]
	I1026 08:42:58.253044  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.253554  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:42:58.254045  342550 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:42:58.254065  342550 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:42:58.254095  342550 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:42:58.254103  342550 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:42:58.254108  342550 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:42:58.254472  342550 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1026 08:42:58.256702  342550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:42:58.269972  342550 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1026 08:42:58.269997  342550 kubeadm.go:601] duration metric: took 30.154432ms to restartPrimaryControlPlane
	I1026 08:42:58.270006  342550 kubeadm.go:402] duration metric: took 107.493524ms to StartCluster
	I1026 08:42:58.270028  342550 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.270094  342550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:58.270678  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.270895  342550 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:42:58.270923  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:42:58.270932  342550 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:42:58.271445  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:58.276967  342550 out.go:179] * Enabled addons: 
	I1026 08:42:58.279988  342550 addons.go:514] duration metric: took 9.042438ms for enable addons: enabled=[]
	I1026 08:42:58.280034  342550 start.go:246] waiting for cluster config update ...
	I1026 08:42:58.280044  342550 start.go:255] writing updated cluster config ...
	I1026 08:42:58.283287  342550 out.go:203] 
	I1026 08:42:58.286419  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:58.286541  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.289808  342550 out.go:179] * Starting "ha-232402-m02" control-plane node in "ha-232402" cluster
	I1026 08:42:58.292646  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:42:58.295642  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:42:58.298397  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:58.298422  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:42:58.298528  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:42:58.298543  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:42:58.298666  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.298902  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:42:58.334398  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:42:58.334424  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:42:58.334438  342550 cache.go:232] Successfully downloaded all kic artifacts
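The daemon check above amounts to asking Docker whether the kicbase image is already present before pulling. A minimal sketch shelling out to `docker image inspect` (not minikube's image.go, which also pins the sha256 digest shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

// inDaemon reports whether the image reference exists in the local
// docker daemon; `docker image inspect` exits non-zero when absent.
func inDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if inDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not found; would pull", ref)
	}
}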
	I1026 08:42:58.334461  342550 start.go:360] acquireMachinesLock for ha-232402-m02: {Name:mkcee86299772a936378440a31e878294fbfa9f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:42:58.334510  342550 start.go:364] duration metric: took 34.667µs to acquireMachinesLock for "ha-232402-m02"
	I1026 08:42:58.334530  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:42:58.334535  342550 fix.go:54] fixHost starting: m02
	I1026 08:42:58.334809  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:42:58.368471  342550 fix.go:112] recreateIfNeeded on ha-232402-m02: state=Stopped err=<nil>
	W1026 08:42:58.368496  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:42:58.371679  342550 out.go:252] * Restarting existing docker container for "ha-232402-m02" ...
	I1026 08:42:58.371767  342550 cli_runner.go:164] Run: docker start ha-232402-m02
	I1026 08:42:58.772810  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:42:58.801152  342550 kic.go:430] container "ha-232402-m02" state is running.
	I1026 08:42:58.801522  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:42:58.832989  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.833245  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:42:58.833311  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:42:58.867008  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:58.867344  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:42:58.867353  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:42:58.868022  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:43:02.066423  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m02
	
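The `handshake failed: EOF` line above is a normal early attempt against a container whose sshd is still starting; provisioning simply retries the dial until it succeeds a few seconds later. A minimal sketch of such a retry loop, assuming golang.org/x/crypto/ssh and omitting real authentication:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd accepts the handshake or the
// deadline passes, returning the last error seen on failure.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	var lastErr error
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(time.Second) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd starts
	}
	return nil, fmt.Errorf("ssh not ready: %w", lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{
		User: "docker",
		// Sketch only: a real client would use ssh.PublicKeys with the
		// machine's id_rsa and verify the host key.
		Auth:            []ssh.AuthMethod{},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33185", cfg, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}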
	I1026 08:43:02.066511  342550 ubuntu.go:182] provisioning hostname "ha-232402-m02"
	I1026 08:43:02.066610  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:02.100484  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:02.100810  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:02.100821  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m02 && echo "ha-232402-m02" | sudo tee /etc/hostname
	I1026 08:43:02.308004  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m02
	
	I1026 08:43:02.308166  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:02.334891  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:02.335210  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:02.335226  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:02.514818  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:02.514905  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:43:02.514937  342550 ubuntu.go:190] setting up certificates
	I1026 08:43:02.514979  342550 provision.go:84] configureAuth start
	I1026 08:43:02.515065  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:43:02.560373  342550 provision.go:143] copyHostCerts
	I1026 08:43:02.560414  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:02.560461  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:43:02.560470  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:02.560546  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:43:02.560626  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:02.560643  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:43:02.560648  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:02.560672  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:43:02.560715  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:02.560731  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:43:02.560735  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:02.560758  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:43:02.560803  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m02 san=[127.0.0.1 192.168.49.3 ha-232402-m02 localhost minikube]
	I1026 08:43:03.208517  342550 provision.go:177] copyRemoteCerts
	I1026 08:43:03.208589  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:03.208637  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.226696  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:03.338996  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:43:03.339064  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:43:03.364234  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:43:03.364299  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:03.392294  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:43:03.392357  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:43:03.425649  342550 provision.go:87] duration metric: took 910.644183ms to configureAuth
	I1026 08:43:03.425677  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:43:03.425959  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:03.426065  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.458884  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:03.459198  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:03.459218  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:03.839944  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:03.839966  342550 machine.go:96] duration metric: took 5.006711527s to provisionDockerMachine
	I1026 08:43:03.839977  342550 start.go:293] postStartSetup for "ha-232402-m02" (driver="docker")
	I1026 08:43:03.839988  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:03.840046  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:03.840113  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.857989  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:03.966802  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:03.970325  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:43:03.970356  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:43:03.970368  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:43:03.970455  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:43:03.970594  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:43:03.970609  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:43:03.970707  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:03.978929  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:04.001526  342550 start.go:296] duration metric: took 161.533931ms for postStartSetup
	I1026 08:43:04.001644  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:43:04.001711  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.029362  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.147606  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:43:04.158657  342550 fix.go:56] duration metric: took 5.824113305s for fixHost
	I1026 08:43:04.158679  342550 start.go:83] releasing machines lock for "ha-232402-m02", held for 5.824161494s
	I1026 08:43:04.158852  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:43:04.190337  342550 out.go:179] * Found network options:
	I1026 08:43:04.193487  342550 out.go:179]   - NO_PROXY=192.168.49.2
	W1026 08:43:04.196584  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:04.196654  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:43:04.196729  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:04.196774  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.197012  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:04.197069  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.241682  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.251119  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.602534  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:04.612399  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:04.612470  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:04.625469  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:43:04.625494  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:43:04.625529  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:43:04.625585  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:04.650032  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:04.672644  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:04.672717  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:04.691930  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:04.713738  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:04.895936  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:05.091815  342550 docker.go:234] disabling docker service ...
	I1026 08:43:05.091890  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:05.117939  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:05.141552  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:05.385159  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:05.717724  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:05.754449  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:05.787254  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:43:05.787365  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.812135  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:05.812208  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.833814  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.869621  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.895385  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:43:05.916665  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.945670  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.979261  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:06.007406  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:43:06.024152  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:43:06.048022  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:06.407451  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:43:07.762107  342550 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.354620144s)
	I1026 08:43:07.762151  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:43:07.762206  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:43:07.766031  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:43:07.766103  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:43:07.769733  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:43:07.814809  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:43:07.814907  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:43:07.866941  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:43:07.921153  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:43:07.924118  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:43:07.927047  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:43:07.969779  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:43:07.973594  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:43:07.987191  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:43:07.987445  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:07.987717  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:43:08.008779  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:43:08.009283  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.3
	I1026 08:43:08.009300  342550 certs.go:195] generating shared ca certs ...
	I1026 08:43:08.009316  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:43:08.009468  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:43:08.009524  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:43:08.009531  342550 certs.go:257] generating profile certs ...
	I1026 08:43:08.009619  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:43:08.009879  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.fae769c1
	I1026 08:43:08.009932  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:43:08.009943  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:43:08.009956  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:43:08.009967  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:43:08.009979  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:43:08.009990  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:43:08.010002  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:43:08.010014  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:43:08.010024  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:43:08.010077  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:43:08.010105  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:43:08.010112  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:43:08.010135  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:43:08.010156  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:43:08.010177  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:43:08.010236  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:08.010266  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.010279  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.010289  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.010370  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:43:08.032241  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:43:08.139306  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 08:43:08.144038  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 08:43:08.155846  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 08:43:08.160324  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 08:43:08.170065  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 08:43:08.174060  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 08:43:08.188168  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 08:43:08.192073  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1026 08:43:08.200629  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 08:43:08.205998  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 08:43:08.216901  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 08:43:08.221162  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 08:43:08.231147  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:43:08.250111  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:43:08.269251  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:43:08.288444  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:43:08.306389  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:43:08.325763  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:43:08.345171  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:43:08.363276  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:43:08.388034  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:43:08.407557  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:43:08.426288  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:43:08.445629  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 08:43:08.459889  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 08:43:08.474059  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 08:43:08.487641  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1026 08:43:08.501076  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 08:43:08.514660  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 08:43:08.530178  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 08:43:08.543653  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:43:08.551337  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:43:08.559877  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.563863  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.563978  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.606128  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:43:08.614418  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:43:08.622608  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.626862  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.626984  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.668441  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:43:08.678228  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:43:08.694156  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.699405  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.699525  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.741501  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:43:08.749451  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:43:08.753614  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:43:08.794639  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:43:08.835994  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:43:08.884952  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:43:08.929998  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:43:08.973568  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
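Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours (exit status 0 means it does not). A minimal Go equivalent of one such check, using a path from the log; this is a sketch, not minikube's certificate code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Mirrors openssl's -checkend window: 86400 seconds from now.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate valid beyond the 24h check window")
	}
}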
	I1026 08:43:09.018771  342550 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1026 08:43:09.018901  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:43:09.018964  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:43:09.019040  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:43:09.033326  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:43:09.033397  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
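The manifest above is emitted by minikube's kube-vip config generator, with ipvs load-balancing dropped because the kernel modules were not found. A much-reduced sketch of producing such a static-pod manifest from a Go template (field values taken from the log, structure heavily trimmed, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// manifest is a trimmed stand-in for the full kube-vip pod spec above.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kubevip").Parse(manifest))
	_ = t.Execute(os.Stdout, struct {
		Image, VIP, Port string
	}{"ghcr.io/kube-vip/kube-vip:v1.0.1", "192.168.49.254", "8443"})
}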
	I1026 08:43:09.033460  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:43:09.042327  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:43:09.042441  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 08:43:09.053364  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:43:09.067913  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:43:09.083307  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:43:09.097627  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:43:09.102025  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:43:09.114414  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:09.252566  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:43:09.267980  342550 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:43:09.268336  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:09.273239  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:43:09.276128  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:09.414962  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:43:09.429491  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:43:09.429623  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:43:09.429932  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m02" to be "Ready" ...
	I1026 08:43:27.238867  342550 node_ready.go:49] node "ha-232402-m02" is "Ready"
	I1026 08:43:27.238899  342550 node_ready.go:38] duration metric: took 17.808924366s for node "ha-232402-m02" to be "Ready" ...
	I1026 08:43:27.238912  342550 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:43:27.238976  342550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:43:27.263231  342550 api_server.go:72] duration metric: took 17.995203495s to wait for apiserver process to appear ...
	I1026 08:43:27.263257  342550 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:43:27.263278  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:27.286625  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:43:27.286661  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:43:27.763965  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:27.797733  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:27.797765  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:43:28.264086  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:28.272772  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:28.272800  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:43:28.763318  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:28.773873  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:28.773903  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:43:29.263609  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:29.271856  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:43:29.272939  342550 api_server.go:141] control plane version: v1.34.1
	I1026 08:43:29.272963  342550 api_server.go:131] duration metric: took 2.009698678s to wait for apiserver health ...
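
The two-second wait above is minikube polling the apiserver's /healthz endpoint roughly every 500ms until the failing [-]poststarthook/rbac/bootstrap-roles check clears and the endpoint returns 200. A minimal Go sketch of that polling pattern (not minikube's actual api_server.go implementation; the URL is taken from the log, and TLS verification is skipped here for brevity where the real client trusts the cluster CA):

	// healthzpoll.go: poll a kube-apiserver /healthz endpoint until it reports 200 OK.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// Assumption: skip verification for brevity; use the cluster CA in practice.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver body is simply "ok"
				}
				// A 500 body lists each check as [+] ok or [-] failed, as in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
		}
		return fmt.Errorf("apiserver never became healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
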
	I1026 08:43:29.272972  342550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:43:29.286570  342550 system_pods.go:59] 26 kube-system pods found
	I1026 08:43:29.286609  342550 system_pods.go:61] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.286618  342550 system_pods.go:61] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.286624  342550 system_pods.go:61] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:43:29.286629  342550 system_pods.go:61] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:43:29.286634  342550 system_pods.go:61] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:43:29.286639  342550 system_pods.go:61] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:43:29.286644  342550 system_pods.go:61] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:43:29.286648  342550 system_pods.go:61] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:43:29.286659  342550 system_pods.go:61] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:43:29.286666  342550 system_pods.go:61] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:43:29.286679  342550 system_pods.go:61] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:43:29.286684  342550 system_pods.go:61] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:43:29.286690  342550 system_pods.go:61] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:43:29.286698  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:43:29.286704  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:43:29.286759  342550 system_pods.go:61] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:43:29.286774  342550 system_pods.go:61] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:43:29.286779  342550 system_pods.go:61] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:43:29.286784  342550 system_pods.go:61] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:43:29.286790  342550 system_pods.go:61] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:43:29.286797  342550 system_pods.go:61] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:43:29.286807  342550 system_pods.go:61] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:43:29.286813  342550 system_pods.go:61] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:43:29.286818  342550 system_pods.go:61] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:43:29.286824  342550 system_pods.go:61] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:43:29.286830  342550 system_pods.go:61] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:43:29.286835  342550 system_pods.go:74] duration metric: took 13.857629ms to wait for pod list to return data ...
	I1026 08:43:29.286845  342550 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:43:29.292456  342550 default_sa.go:45] found service account: "default"
	I1026 08:43:29.292483  342550 default_sa.go:55] duration metric: took 5.6309ms for default service account to be created ...
	I1026 08:43:29.292493  342550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:43:29.303662  342550 system_pods.go:86] 26 kube-system pods found
	I1026 08:43:29.303699  342550 system_pods.go:89] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.303711  342550 system_pods.go:89] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.303717  342550 system_pods.go:89] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:43:29.303722  342550 system_pods.go:89] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:43:29.303726  342550 system_pods.go:89] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:43:29.303731  342550 system_pods.go:89] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:43:29.303736  342550 system_pods.go:89] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:43:29.303741  342550 system_pods.go:89] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:43:29.303745  342550 system_pods.go:89] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:43:29.303755  342550 system_pods.go:89] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:43:29.303760  342550 system_pods.go:89] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:43:29.303771  342550 system_pods.go:89] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:43:29.303778  342550 system_pods.go:89] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:43:29.303783  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:43:29.303788  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:43:29.303793  342550 system_pods.go:89] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:43:29.303796  342550 system_pods.go:89] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:43:29.303800  342550 system_pods.go:89] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:43:29.303804  342550 system_pods.go:89] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:43:29.303810  342550 system_pods.go:89] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:43:29.303815  342550 system_pods.go:89] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:43:29.303819  342550 system_pods.go:89] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:43:29.303823  342550 system_pods.go:89] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:43:29.303827  342550 system_pods.go:89] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:43:29.303830  342550 system_pods.go:89] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:43:29.303834  342550 system_pods.go:89] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:43:29.303840  342550 system_pods.go:126] duration metric: took 11.341628ms to wait for k8s-apps to be running ...
	I1026 08:43:29.303854  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:43:29.303908  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:43:29.323431  342550 system_svc.go:56] duration metric: took 19.574494ms WaitForService to wait for kubelet
	I1026 08:43:29.323460  342550 kubeadm.go:586] duration metric: took 20.055438295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
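
The system_pods.go and default_sa.go waits summarized above are ordinary list calls against the apiserver. A rough client-go equivalent of the pod wait, as a sketch only (not minikube's code; the kubeconfig path is an assumption, since minikube builds its rest.Config in-process):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig on disk; minikube constructs its client config directly.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			// Mirrors the Running / Ready:ContainersNotReady distinction in the log.
			fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}
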
	I1026 08:43:29.323478  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:43:29.333801  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333841  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333854  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333859  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333864  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333868  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333872  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333876  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333881  342550 node_conditions.go:105] duration metric: took 10.39707ms to run NodePressure ...
	I1026 08:43:29.333892  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:43:29.333919  342550 start.go:255] writing updated cluster config ...
	I1026 08:43:29.337457  342550 out.go:203] 
	I1026 08:43:29.340743  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:29.340922  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.344362  342550 out.go:179] * Starting "ha-232402-m03" control-plane node in "ha-232402" cluster
	I1026 08:43:29.348018  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:43:29.351781  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:43:29.354814  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:43:29.354918  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:43:29.354883  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:43:29.355255  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:43:29.355280  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:43:29.355447  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.375411  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:43:29.375429  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:43:29.375442  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:43:29.375466  342550 start.go:360] acquireMachinesLock for ha-232402-m03: {Name:mk956b02a4f725f23f9fb3f2ce92112bc2e1b45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:43:29.375516  342550 start.go:364] duration metric: took 35.873µs to acquireMachinesLock for "ha-232402-m03"
	I1026 08:43:29.375534  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:43:29.375540  342550 fix.go:54] fixHost starting: m03
	I1026 08:43:29.375948  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:43:29.401895  342550 fix.go:112] recreateIfNeeded on ha-232402-m03: state=Stopped err=<nil>
	W1026 08:43:29.401920  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:43:29.405493  342550 out.go:252] * Restarting existing docker container for "ha-232402-m03" ...
	I1026 08:43:29.405580  342550 cli_runner.go:164] Run: docker start ha-232402-m03
	I1026 08:43:29.812599  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:43:29.835988  342550 kic.go:430] container "ha-232402-m03" state is running.
	I1026 08:43:29.836452  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:29.866387  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.866681  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:43:29.866829  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:29.906362  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:29.906690  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:29.907638  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:43:29.908402  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:43:33.170636  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m03
	
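The "ssh: handshake failed: EOF" warning above is benign: immediately after `docker start`, the container's sshd is not yet accepting connections, so the dial is retried until it succeeds about three seconds later. A sketch of such a retry loop with golang.org/x/crypto/ssh, using the forwarded port and key path shown in the log (assumed shape; not libmachine's implementation):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd starts
			time.Sleep(time.Second)
		}
		return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
			Timeout:         10 * time.Second,
		}
		client, err := dialWithRetry("127.0.0.1:33190", cfg, 10)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}
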
	I1026 08:43:33.170746  342550 ubuntu.go:182] provisioning hostname "ha-232402-m03"
	I1026 08:43:33.170851  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:33.206417  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:33.206830  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:33.206844  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m03 && echo "ha-232402-m03" | sudo tee /etc/hostname
	I1026 08:43:33.524716  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m03
	
	I1026 08:43:33.524858  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:33.549504  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:33.549810  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:33.549827  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:33.856044  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:33.856113  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:43:33.856146  342550 ubuntu.go:190] setting up certificates
	I1026 08:43:33.856188  342550 provision.go:84] configureAuth start
	I1026 08:43:33.856287  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:33.880087  342550 provision.go:143] copyHostCerts
	I1026 08:43:33.880126  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:33.880159  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:43:33.880166  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:33.880246  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:43:33.880325  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:33.880342  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:43:33.880346  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:33.880369  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:43:33.880408  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:33.880423  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:43:33.880427  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:33.880448  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:43:33.880491  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m03 san=[127.0.0.1 192.168.49.4 ha-232402-m03 localhost minikube]
	I1026 08:43:34.115589  342550 provision.go:177] copyRemoteCerts
	I1026 08:43:34.115701  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:34.115779  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.133889  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:34.307782  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:43:34.307842  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:34.361519  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:43:34.361585  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:43:34.420419  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:43:34.420486  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:43:34.479633  342550 provision.go:87] duration metric: took 623.414755ms to configureAuth
	I1026 08:43:34.479699  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:43:34.479974  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:34.480118  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.505756  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:34.506063  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:34.506078  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:34.934452  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:34.934518  342550 machine.go:96] duration metric: took 5.067825426s to provisionDockerMachine
	I1026 08:43:34.934546  342550 start.go:293] postStartSetup for "ha-232402-m03" (driver="docker")
	I1026 08:43:34.934571  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:34.934666  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:34.934854  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.954917  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.082367  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:35.089885  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:43:35.090161  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:43:35.090176  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:43:35.090254  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:43:35.090369  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:43:35.090381  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:43:35.090546  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:35.101842  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:35.125627  342550 start.go:296] duration metric: took 191.050639ms for postStartSetup
	I1026 08:43:35.125778  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:43:35.125843  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.147102  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.264825  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:43:35.271672  342550 fix.go:56] duration metric: took 5.896121251s for fixHost
	I1026 08:43:35.271696  342550 start.go:83] releasing machines lock for "ha-232402-m03", held for 5.89617159s
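
start.go brackets the whole fixHost sequence with a named machines lock (acquired at start.go:364, released at start.go:83, with the {Delay:500ms Timeout:10m0s} options shown earlier). One plausible shape for a per-name lock with a timeout, sketched purely for illustration:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	var (
		mu    sync.Mutex
		locks = map[string]chan struct{}{}
	)

	// lockFor lazily creates a one-token channel per machine name.
	func lockFor(name string) chan struct{} {
		mu.Lock()
		defer mu.Unlock()
		if _, ok := locks[name]; !ok {
			ch := make(chan struct{}, 1)
			ch <- struct{}{} // token available
			locks[name] = ch
		}
		return locks[name]
	}

	// acquire takes the token or gives up after the timeout, like the 10m0s in the log.
	func acquire(name string, timeout time.Duration) (release func(), err error) {
		ch := lockFor(name)
		select {
		case <-ch:
			return func() { ch <- struct{}{} }, nil
		case <-time.After(timeout):
			return nil, fmt.Errorf("timed out acquiring lock %q", name)
		}
	}

	func main() {
		release, err := acquire("ha-232402-m03", 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("holding machines lock for ha-232402-m03")
	}
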
	I1026 08:43:35.271770  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:35.297127  342550 out.go:179] * Found network options:
	I1026 08:43:35.302967  342550 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1026 08:43:35.306003  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306038  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306066  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306091  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:43:35.306177  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:35.306229  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.306517  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:35.306579  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.328577  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.334791  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.497414  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:35.553666  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:35.553760  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:35.566215  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:43:35.566249  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:43:35.566284  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:43:35.566344  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:35.592142  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:35.609686  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:35.609758  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:35.630610  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:35.655250  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:35.914838  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:36.134783  342550 docker.go:234] disabling docker service ...
	I1026 08:43:36.134897  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:36.155549  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:36.173043  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:36.485618  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:36.970002  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:37.017784  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:37.075903  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:43:37.075984  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.109912  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:37.110012  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.149021  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.175380  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.186219  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:43:37.221818  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.248314  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.265224  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.288935  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:43:37.303925  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:43:37.319373  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:37.587508  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:45:07.934759  342550 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.347172836s)
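
ssh_runner.go:235 only prints a "Completed:" line with a duration when a command ran long; here `systemctl restart crio` took a full 1m30s, by far the slowest step of this node's recovery. The logging pattern is simple to sketch (local exec is used for brevity where minikube runs the command over SSH, and the 2s threshold is an assumption):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// runLogged runs a command and, if it was slow, logs its completion with the duration.
	func runLogged(name string, args ...string) error {
		start := time.Now()
		err := exec.Command(name, args...).Run()
		if elapsed := time.Since(start); elapsed > 2*time.Second {
			log.Printf("Completed: %s %v: (%s)", name, args, elapsed)
		}
		return err
	}

	func main() {
		if err := runLogged("sudo", "systemctl", "restart", "crio"); err != nil {
			log.Fatal(err)
		}
	}
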
	I1026 08:45:07.934786  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:45:07.934837  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:45:07.939538  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:45:07.939605  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:45:07.943575  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:45:07.968256  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:45:07.968338  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:07.998587  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:08.044252  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:45:08.047310  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:45:08.050469  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1026 08:45:08.053493  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:45:08.069256  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:45:08.074145  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
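
The shell pipeline above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending a fresh one. The same idea in Go, as a self-contained sketch (minikube performs this remotely via the pipeline shown, not with this code):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry keeps every line except the old entry for host, then appends "ip\thost".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
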
	I1026 08:45:08.085961  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:45:08.086231  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:08.086536  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:45:08.111717  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:45:08.112059  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.4
	I1026 08:45:08.112073  342550 certs.go:195] generating shared ca certs ...
	I1026 08:45:08.112098  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:45:08.112222  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:45:08.112268  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:45:08.112279  342550 certs.go:257] generating profile certs ...
	I1026 08:45:08.112378  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:45:08.112451  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.aa893e80
	I1026 08:45:08.112494  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:45:08.112511  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:45:08.112532  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:45:08.112560  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:45:08.112589  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:45:08.112605  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:45:08.112627  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:45:08.112645  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:45:08.112660  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:45:08.112746  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:45:08.112782  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:45:08.112801  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:45:08.112842  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:45:08.112879  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:45:08.112910  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:45:08.112969  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:08.113008  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.113024  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.113046  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.113130  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:45:08.132367  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:45:08.231029  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 08:45:08.235028  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 08:45:08.244659  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 08:45:08.249599  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 08:45:08.261474  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 08:45:08.266790  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 08:45:08.276538  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 08:45:08.280256  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1026 08:45:08.289634  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 08:45:08.293405  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 08:45:08.301646  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 08:45:08.305975  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 08:45:08.315022  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:45:08.338065  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:45:08.356967  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:45:08.380657  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:45:08.402274  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:45:08.422301  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:45:08.441783  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:45:08.461742  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:45:08.481814  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:45:08.502025  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:45:08.521895  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:45:08.542103  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 08:45:08.555693  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 08:45:08.570653  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 08:45:08.588674  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1026 08:45:08.602475  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 08:45:08.616618  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 08:45:08.630309  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 08:45:08.645565  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:45:08.652358  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:45:08.661564  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.665847  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.665967  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.709135  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:45:08.717967  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:45:08.727059  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.731470  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.731567  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.774541  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:45:08.784749  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:45:08.793805  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.797757  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.797878  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.841551  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:45:08.850068  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:45:08.854034  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:45:08.895708  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:45:08.942061  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:45:08.984630  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:45:09.028757  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:45:09.071885  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
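
Each openssl invocation above uses `-checkend 86400`, which asks whether the certificate expires within the next 24 hours (non-zero exit status if so). A Go equivalent using crypto/x509, shown for illustration against one of the same files:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon) // matches openssl -checkend 86400 semantics
	}
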
	I1026 08:45:09.113415  342550 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1026 08:45:09.113537  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
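
kubeadm.go:946 prints the kubelet systemd drop-in it just rendered, with the node-specific --hostname-override and --node-ip filled in. A text/template sketch of rendering such a unit (the template text here is an abbreviated assumption shaped after the log output):

	package main

	import (
		"os"
		"text/template"
	)

	// Abbreviated unit template; the real drop-in carries more flags, as shown above.
	const unit = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.34.1",
			"NodeName":          "ha-232402-m03",
			"NodeIP":            "192.168.49.4",
		})
	}
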
	I1026 08:45:09.113588  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:45:09.113648  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:45:09.127980  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:45:09.128041  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
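
kube-vip.go:115-163 above first probes for the ip_vs kernel modules and, finding none, generates the manifest without control-plane load-balancing. The decision itself is just a check of lsmod output, sketched here (the real check runs over SSH on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ipvsAvailable reports whether any ip_vs module shows up in lsmod output.
	func ipvsAvailable() bool {
		out, err := exec.Command("lsmod").Output()
		if err != nil {
			return false
		}
		return strings.Contains(string(out), "ip_vs")
	}

	func main() {
		if ipvsAvailable() {
			fmt.Println("enabling control-plane load-balancing (ip_vs present)")
		} else {
			fmt.Println("giving up enabling control-plane load-balancing: ip_vs modules not available")
		}
	}
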
	I1026 08:45:09.128109  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:45:09.136574  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:45:09.136660  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 08:45:09.145279  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:45:09.159587  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:45:09.174486  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:45:09.192617  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:45:09.196600  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:09.206757  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:09.371220  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:09.388111  342550 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:45:09.388597  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:09.391505  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:45:09.394393  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:09.549234  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:09.565513  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:45:09.565648  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:45:09.567107  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m03" to be "Ready" ...
	W1026 08:45:11.571085  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	W1026 08:45:13.571335  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	W1026 08:45:16.071949  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	I1026 08:45:16.573590  342550 node_ready.go:49] node "ha-232402-m03" is "Ready"
	I1026 08:45:16.573675  342550 node_ready.go:38] duration metric: took 7.00653579s for node "ha-232402-m03" to be "Ready" ...
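
node_ready.go above polls the Node object until its Ready condition flips from Unknown to True, which took about 7s after the kubelet start. A client-go sketch of the same condition check (assumed, not minikube's code; the kubeconfig path is hypothetical, and the real wait carries a 6m timeout where this loop is unbounded):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(cs, "ha-232402-m03")
			if err == nil && ready {
				fmt.Println(`node "ha-232402-m03" is "Ready"`)
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
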
	I1026 08:45:16.573704  342550 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:45:16.573795  342550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:45:16.595522  342550 api_server.go:72] duration metric: took 7.20735956s to wait for apiserver process to appear ...
	I1026 08:45:16.595595  342550 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:45:16.595631  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:45:16.604035  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:45:16.604987  342550 api_server.go:141] control plane version: v1.34.1
	I1026 08:45:16.605006  342550 api_server.go:131] duration metric: took 9.390023ms to wait for apiserver health ...
	I1026 08:45:16.605015  342550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:45:16.613936  342550 system_pods.go:59] 26 kube-system pods found
	I1026 08:45:16.614018  342550 system_pods.go:61] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running
	I1026 08:45:16.614048  342550 system_pods.go:61] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running
	I1026 08:45:16.614068  342550 system_pods.go:61] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:45:16.614088  342550 system_pods.go:61] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:45:16.614126  342550 system_pods.go:61] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:45:16.614144  342550 system_pods.go:61] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:45:16.614163  342550 system_pods.go:61] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:45:16.614182  342550 system_pods.go:61] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:45:16.614216  342550 system_pods.go:61] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:45:16.614236  342550 system_pods.go:61] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running
	I1026 08:45:16.614257  342550 system_pods.go:61] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:45:16.614277  342550 system_pods.go:61] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:45:16.614312  342550 system_pods.go:61] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running
	I1026 08:45:16.614337  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:45:16.614368  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:45:16.614385  342550 system_pods.go:61] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:45:16.614414  342550 system_pods.go:61] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:45:16.614446  342550 system_pods.go:61] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:45:16.614463  342550 system_pods.go:61] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:45:16.614481  342550 system_pods.go:61] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running
	I1026 08:45:16.614508  342550 system_pods.go:61] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:45:16.614538  342550 system_pods.go:61] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:45:16.614557  342550 system_pods.go:61] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:45:16.614577  342550 system_pods.go:61] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:45:16.614614  342550 system_pods.go:61] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:45:16.614633  342550 system_pods.go:61] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:45:16.614654  342550 system_pods.go:74] duration metric: took 9.633315ms to wait for pod list to return data ...
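
	The pod-list wait above corresponds to a plain client-go List on the kube-system namespace, printing each pod and its phase. A minimal sketch, where the kubeconfig path is an assumption for illustration:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a local kubeconfig; the path is assumed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List every pod in kube-system, as the system_pods wait does.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}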
	I1026 08:45:16.614688  342550 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:45:16.617833  342550 default_sa.go:45] found service account: "default"
	I1026 08:45:16.617904  342550 default_sa.go:55] duration metric: took 3.173782ms for default service account to be created ...
	I1026 08:45:16.617928  342550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:45:16.715675  342550 system_pods.go:86] 26 kube-system pods found
	I1026 08:45:16.715759  342550 system_pods.go:89] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running
	I1026 08:45:16.715782  342550 system_pods.go:89] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running
	I1026 08:45:16.715824  342550 system_pods.go:89] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:45:16.715843  342550 system_pods.go:89] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:45:16.715864  342550 system_pods.go:89] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:45:16.715937  342550 system_pods.go:89] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:45:16.715954  342550 system_pods.go:89] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:45:16.715984  342550 system_pods.go:89] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:45:16.716013  342550 system_pods.go:89] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:45:16.716032  342550 system_pods.go:89] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running
	I1026 08:45:16.716052  342550 system_pods.go:89] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:45:16.716092  342550 system_pods.go:89] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:45:16.716112  342550 system_pods.go:89] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running
	I1026 08:45:16.716133  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:45:16.716170  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:45:16.716191  342550 system_pods.go:89] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:45:16.716210  342550 system_pods.go:89] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:45:16.716229  342550 system_pods.go:89] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:45:16.716260  342550 system_pods.go:89] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:45:16.716280  342550 system_pods.go:89] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running
	I1026 08:45:16.716302  342550 system_pods.go:89] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:45:16.716341  342550 system_pods.go:89] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:45:16.716362  342550 system_pods.go:89] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:45:16.716380  342550 system_pods.go:89] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:45:16.716399  342550 system_pods.go:89] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:45:16.716435  342550 system_pods.go:89] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:45:16.716457  342550 system_pods.go:126] duration metric: took 98.51028ms to wait for k8s-apps to be running ...
	I1026 08:45:16.716492  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:45:16.716578  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:45:16.737535  342550 system_svc.go:56] duration metric: took 21.034459ms WaitForService to wait for kubelet
	I1026 08:45:16.737613  342550 kubeadm.go:586] duration metric: took 7.349454949s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:45:16.737646  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:45:16.742538  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742622  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742649  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742689  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742708  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742751  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742771  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742799  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742848  342550 node_conditions.go:105] duration metric: took 5.158408ms to run NodePressure ...
	I1026 08:45:16.742874  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:45:16.742923  342550 start.go:255] writing updated cluster config ...
	I1026 08:45:16.748453  342550 out.go:203] 
	I1026 08:45:16.751669  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:16.751857  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:16.755487  342550 out.go:179] * Starting "ha-232402-m04" worker node in "ha-232402" cluster
	I1026 08:45:16.760316  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:45:16.763382  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:45:16.766507  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:45:16.766623  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:45:16.766588  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:45:16.767053  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:45:16.767077  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:45:16.767235  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:16.789140  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:45:16.789160  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:45:16.789172  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:45:16.789196  342550 start.go:360] acquireMachinesLock for ha-232402-m04: {Name:mk15269e9a15e15636295a3a12cc05426ca8566d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:45:16.789248  342550 start.go:364] duration metric: took 36.217µs to acquireMachinesLock for "ha-232402-m04"
	I1026 08:45:16.789267  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:45:16.789272  342550 fix.go:54] fixHost starting: m04
	I1026 08:45:16.789524  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:45:16.816258  342550 fix.go:112] recreateIfNeeded on ha-232402-m04: state=Stopped err=<nil>
	W1026 08:45:16.816289  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:45:16.819903  342550 out.go:252] * Restarting existing docker container for "ha-232402-m04" ...
	I1026 08:45:16.820003  342550 cli_runner.go:164] Run: docker start ha-232402-m04
	I1026 08:45:17.136467  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:45:17.172522  342550 kic.go:430] container "ha-232402-m04" state is running.
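
	The state check here is simply `docker container inspect` with a Go template selecting .State.Status, as shown in the cli_runner lines above. Shelling out the same way from Go, as a sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs `docker container inspect NAME --format={{.State.Status}}`
	// and returns the trimmed result, e.g. "running" or "exited".
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("ha-232402-m04")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("state:", state)
	}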
	I1026 08:45:17.173106  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:17.210858  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:17.211110  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:45:17.212380  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:17.248960  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:17.249254  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:17.249263  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:45:17.250106  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43382->127.0.0.1:33195: read: connection reset by peer
	I1026 08:45:20.411022  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m04
	
	I1026 08:45:20.411054  342550 ubuntu.go:182] provisioning hostname "ha-232402-m04"
	I1026 08:45:20.411151  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:20.437224  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:20.437615  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:20.437634  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m04 && echo "ha-232402-m04" | sudo tee /etc/hostname
	I1026 08:45:20.606470  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m04
	
	I1026 08:45:20.606623  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:20.637294  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:20.637715  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:20.637737  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:45:20.795267  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:45:20.795294  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:45:20.795316  342550 ubuntu.go:190] setting up certificates
	I1026 08:45:20.795325  342550 provision.go:84] configureAuth start
	I1026 08:45:20.795388  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:20.814347  342550 provision.go:143] copyHostCerts
	I1026 08:45:20.814401  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:45:20.814441  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:45:20.814454  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:45:20.814537  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:45:20.814631  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:45:20.814656  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:45:20.814661  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:45:20.814687  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:45:20.814798  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:45:20.814828  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:45:20.814842  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:45:20.814869  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:45:20.814924  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m04 san=[127.0.0.1 192.168.49.5 ha-232402-m04 localhost minikube]
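
	The server cert generated here carries the SANs listed in the log line (127.0.0.1, 192.168.49.5, the hostname, localhost, minikube). A rough sketch of producing a comparable certificate with Go's crypto/x509 follows; it is self-signed for brevity, whereas the real flow signs server.pem with the shared ca.pem/ca-key.pem.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Sketch only: self-signed, while minikube signs with its CA key.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-232402-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the san=[...] list in the log line above.
			DNSNames:    []string{"ha-232402-m04", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		f, err := os.Create("server.pem")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if err := pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}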
	I1026 08:45:21.016159  342550 provision.go:177] copyRemoteCerts
	I1026 08:45:21.016235  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:45:21.016281  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.041440  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.148014  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:45:21.148076  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:45:21.172598  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:45:21.172660  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:45:21.199069  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:45:21.199134  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:45:21.219310  342550 provision.go:87] duration metric: took 423.970968ms to configureAuth
	I1026 08:45:21.219338  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:45:21.219574  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:21.219685  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.244539  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:21.244932  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:21.244952  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:45:21.600980  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:45:21.601003  342550 machine.go:96] duration metric: took 4.389678213s to provisionDockerMachine
	I1026 08:45:21.601016  342550 start.go:293] postStartSetup for "ha-232402-m04" (driver="docker")
	I1026 08:45:21.601027  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:45:21.601089  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:45:21.601135  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.623066  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.735340  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:45:21.738667  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:45:21.738698  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:45:21.738751  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:45:21.738812  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:45:21.738908  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:45:21.738919  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:45:21.739032  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:45:21.746960  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:21.766329  342550 start.go:296] duration metric: took 165.296455ms for postStartSetup
	I1026 08:45:21.766414  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:45:21.766453  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.787386  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.899980  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:45:21.907890  342550 fix.go:56] duration metric: took 5.118610324s for fixHost
	I1026 08:45:21.907917  342550 start.go:83] releasing machines lock for "ha-232402-m04", held for 5.118661688s
	I1026 08:45:21.907988  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:21.933326  342550 out.go:179] * Found network options:
	I1026 08:45:21.936320  342550 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1026 08:45:21.940256  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940294  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940306  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940340  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940357  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940368  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:45:21.940442  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:45:21.940486  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.940766  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:45:21.940826  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.972410  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.978079  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:22.149485  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:45:22.200194  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:45:22.200337  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:45:22.209074  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:45:22.209098  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:45:22.209131  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:45:22.209180  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:45:22.227970  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:45:22.260018  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:45:22.260091  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:45:22.280501  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:45:22.296013  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:45:22.435097  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:45:22.583385  342550 docker.go:234] disabling docker service ...
	I1026 08:45:22.583454  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:45:22.599821  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:45:22.618049  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:45:22.760465  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:45:22.913374  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:45:22.930530  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:45:22.946115  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:45:22.946198  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.955712  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:45:22.955791  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.967161  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.978701  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.988107  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:45:22.999250  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.011010  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.021614  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.033901  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:45:23.047274  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:45:23.055227  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:23.187258  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:45:23.348936  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:45:23.349088  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:45:23.353170  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:45:23.353242  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:45:23.356804  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:45:23.401811  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:45:23.401919  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:23.436307  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:23.473208  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:45:23.476075  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:45:23.478893  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1026 08:45:23.481820  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1026 08:45:23.484818  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:45:23.504854  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:45:23.509411  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:23.519797  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:45:23.520052  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:23.520336  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:45:23.539958  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:45:23.540265  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.5
	I1026 08:45:23.540275  342550 certs.go:195] generating shared ca certs ...
	I1026 08:45:23.540293  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:45:23.540418  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:45:23.540465  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:45:23.540482  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:45:23.540497  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:45:23.540515  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:45:23.540528  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:45:23.540600  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:45:23.540638  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:45:23.540660  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:45:23.540691  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:45:23.540724  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:45:23.540753  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:45:23.540804  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:23.540835  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.540850  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.540862  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.540886  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:45:23.560629  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:45:23.585421  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:45:23.605705  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:45:23.632934  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:45:23.654288  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:45:23.674771  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:45:23.693831  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:45:23.700411  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:45:23.709558  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.716080  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.716173  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.758415  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:45:23.767708  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:45:23.779057  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.784321  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.784454  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.831578  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:45:23.841350  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:45:23.850606  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.854695  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.854826  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.898173  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
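
	The openssl/ln sequence above is the standard way to install a CA into /etc/ssl/certs: compute the OpenSSL subject hash of the certificate, then symlink the cert as <hash>.0 so TLS libraries can find it by hash. The same steps, sketched in Go via os/exec, with paths taken from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the subject hash of a certificate and symlinks it as
	// <hash>.0 in certsDir, mirroring the `ln -fs` pattern in the log.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, as -f does
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}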
	I1026 08:45:23.906572  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:45:23.910323  342550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:45:23.910364  342550 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1026 08:45:23.910446  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:45:23.910505  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:45:23.920573  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:45:23.920679  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1026 08:45:23.932673  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:45:23.947328  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:45:23.969163  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:45:23.973466  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
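
	The /etc/hosts pipeline above replaces-or-appends a single entry: drop any line already ending in the name, then append a fresh "IP<tab>name" pair. An equivalent sketch in Go, writing the file directly rather than staging through /tmp and cp as the log's shell version does:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry removes any existing line for name and appends "ip\tname".
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}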
	I1026 08:45:23.984606  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:24.155134  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:24.171153  342550 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1026 08:45:24.171549  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:24.174346  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:45:24.177303  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:24.343470  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:24.368034  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:45:24.368111  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:45:24.368387  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m04" to be "Ready" ...
	I1026 08:45:25.872447  342550 node_ready.go:49] node "ha-232402-m04" is "Ready"
	I1026 08:45:25.872476  342550 node_ready.go:38] duration metric: took 1.504072228s for node "ha-232402-m04" to be "Ready" ...
	I1026 08:45:25.872489  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:45:25.872631  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:45:25.886146  342550 system_svc.go:56] duration metric: took 13.648567ms WaitForService to wait for kubelet
	I1026 08:45:25.886178  342550 kubeadm.go:586] duration metric: took 1.714983841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:45:25.886197  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:45:25.890052  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890084  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890096  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890101  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890106  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890116  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890120  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890125  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890130  342550 node_conditions.go:105] duration metric: took 3.927915ms to run NodePressure ...
	I1026 08:45:25.890147  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:45:25.890180  342550 start.go:255] writing updated cluster config ...
	I1026 08:45:25.890539  342550 ssh_runner.go:195] Run: rm -f paused
	I1026 08:45:25.897547  342550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:45:25.898046  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:45:25.914674  342550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d4htv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.921403  342550 pod_ready.go:94] pod "coredns-66bc5c9577-d4htv" is "Ready"
	I1026 08:45:25.921528  342550 pod_ready.go:86] duration metric: took 6.710293ms for pod "coredns-66bc5c9577-d4htv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.921572  342550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vctcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.928323  342550 pod_ready.go:94] pod "coredns-66bc5c9577-vctcf" is "Ready"
	I1026 08:45:25.928388  342550 pod_ready.go:86] duration metric: took 6.794421ms for pod "coredns-66bc5c9577-vctcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.931541  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.938566  342550 pod_ready.go:94] pod "etcd-ha-232402" is "Ready"
	I1026 08:45:25.938593  342550 pod_ready.go:86] duration metric: took 7.022993ms for pod "etcd-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.938603  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.944339  342550 pod_ready.go:94] pod "etcd-ha-232402-m02" is "Ready"
	I1026 08:45:25.944373  342550 pod_ready.go:86] duration metric: took 5.762714ms for pod "etcd-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.944383  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:26.098602  342550 request.go:683] "Waited before sending request" delay="154.1318ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-232402-m03"
	I1026 08:45:26.299278  342550 request.go:683] "Waited before sending request" delay="197.131159ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:26.498654  342550 request.go:683] "Waited before sending request" delay="53.17348ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-232402-m03"
	I1026 08:45:26.699396  342550 request.go:683] "Waited before sending request" delay="197.322103ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:27.099200  342550 request.go:683] "Waited before sending request" delay="150.305147ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
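
	The repeated "Waited before sending request ... client-side throttling" lines come from client-go's built-in rate limiter. The rest.Config dump above shows QPS:0, Burst:0, which client-go treats as its defaults (5 QPS with a burst of 10), so a tight polling loop hits the limiter and gets delayed. A hedged sketch of raising those limits, assuming a kubeconfig path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path is an assumption for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		// Zero QPS/Burst fall back to client-go defaults (5 QPS, burst 10);
		// the throttling log lines are that limiter delaying GETs.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", cs != nil)
	}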
	W1026 08:45:27.952681  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:30.450341  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:32.451378  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:34.951997  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:36.952338  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:38.952753  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:41.452152  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:43.951084  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:45.956575  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:48.451391  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:50.451685  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:52.950573  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:54.951442  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	I1026 08:45:56.952674  342550 pod_ready.go:94] pod "etcd-ha-232402-m03" is "Ready"
	I1026 08:45:56.952698  342550 pod_ready.go:86] duration metric: took 31.008309673s for pod "etcd-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.957384  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.966004  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402" is "Ready"
	I1026 08:45:56.966072  342550 pod_ready.go:86] duration metric: took 8.662888ms for pod "kube-apiserver-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.966104  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.973739  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402-m02" is "Ready"
	I1026 08:45:56.973764  342550 pod_ready.go:86] duration metric: took 7.640413ms for pod "kube-apiserver-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.973773  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.981079  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402-m03" is "Ready"
	I1026 08:45:56.981103  342550 pod_ready.go:86] duration metric: took 7.323871ms for pod "kube-apiserver-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
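
	Each pod_ready wait above reduces to fetching the pod and inspecting its PodReady condition. A minimal client-go sketch of that check, with the kubeconfig path again assumed:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady fetches the pod and reports whether its PodReady condition
	// is True, mirroring the pod_ready.go checks in the log.
	func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := isPodReady(cs, "kube-system", "etcd-ha-232402-m03")
		fmt.Println(ready, err)
	}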
	I1026 08:45:56.985549  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.145955  342550 request.go:683] "Waited before sending request" delay="160.263354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402"
	I1026 08:45:57.345448  342550 request.go:683] "Waited before sending request" delay="176.112448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402"
	I1026 08:45:57.350017  342550 pod_ready.go:94] pod "kube-controller-manager-ha-232402" is "Ready"
	I1026 08:45:57.350048  342550 pod_ready.go:86] duration metric: took 364.42267ms for pod "kube-controller-manager-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.350058  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.545478  342550 request.go:683] "Waited before sending request" delay="195.318809ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m02"
	I1026 08:45:57.746036  342550 request.go:683] "Waited before sending request" delay="196.306126ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m02"
	I1026 08:45:57.749268  342550 pod_ready.go:94] pod "kube-controller-manager-ha-232402-m02" is "Ready"
	I1026 08:45:57.749295  342550 pod_ready.go:86] duration metric: took 399.228382ms for pod "kube-controller-manager-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.749305  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.945742  342550 request.go:683] "Waited before sending request" delay="196.324022ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m03"
	I1026 08:45:58.145179  342550 request.go:683] "Waited before sending request" delay="195.240885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:58.346153  342550 request.go:683] "Waited before sending request" delay="96.402716ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m03"
	I1026 08:45:58.545837  342550 request.go:683] "Waited before sending request" delay="196.140702ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:58.946129  342550 request.go:683] "Waited before sending request" delay="192.251793ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:59.345416  342550 request.go:683] "Waited before sending request" delay="92.227487ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	W1026 08:45:59.755924  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	[... 89 near-identical pod_ready.go:104 warnings elided: the same "kube-controller-manager-ha-232402-m03" is not "Ready" check repeats every ~2-2.5s from 08:46:01 through 08:49:23 ...]
	W1026 08:49:25.756946  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	I1026 08:49:25.898241  342550 pod_ready.go:86] duration metric: took 3m28.14891381s for pod "kube-controller-manager-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:49:25.898285  342550 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1026 08:49:25.898319  342550 pod_ready.go:40] duration metric: took 4m0.000740057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:49:25.901244  342550 out.go:203] 
	W1026 08:49:25.904226  342550 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1026 08:49:25.907092  342550 out.go:203] 
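
Note: the "Waited before sending request ... client-side throttling" lines come from client-go's request-level rate limiter (by default roughly 5 QPS with a burst of 10, unless the rest.Config overrides it), and the pod_ready.go warnings are a readiness poll that aborts once its context deadline expires — 4m0s here, which is why the run ends with GUEST_START: context deadline exceeded. A minimal sketch of such a wait loop in Go, assuming a client-go clientset; the helper name waitPodReady is illustrative, not minikube's actual code:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod every ~2.5s until it reports the Ready
// condition or the 4-minute deadline expires — the same cadence as the
// warnings above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as "not ready yet" and retry
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

With a 4-minute budget and a ~2.5s interval this produces exactly the run of warnings seen above; the poll returns a timeout error when the deadline elapses, matching the "waitPodCondition: context deadline exceeded" message.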
	
	
	==> CRI-O <==
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.055530748Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a36f7ca6-cdd7-47c7-b863-069411fe28c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.056722414Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=25840e2c-159c-4bdd-b6c4-5f359a2f8cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.05732914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.065669981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.065849413Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3f6f3e26e690a8f6ac18c116a6c69eb990333d2104beb5428efb4b408a2d6f63/merged/etc/passwd: no such file or directory"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.065871247Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3f6f3e26e690a8f6ac18c116a6c69eb990333d2104beb5428efb4b408a2d6f63/merged/etc/group: no such file or directory"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.067396454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.096904327Z" level=info msg="Created container ccafc56fd4a2108827ca65d4cac792ef35a3726616238488cd483658cbfcee06: kube-system/storage-provisioner/storage-provisioner" id=25840e2c-159c-4bdd-b6c4-5f359a2f8cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.098189532Z" level=info msg="Starting container: ccafc56fd4a2108827ca65d4cac792ef35a3726616238488cd483658cbfcee06" id=a500e041-7242-4266-b2f5-5e046e4b6e73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.105395036Z" level=info msg="Started container" PID=1385 containerID=ccafc56fd4a2108827ca65d4cac792ef35a3726616238488cd483658cbfcee06 description=kube-system/storage-provisioner/storage-provisioner id=a500e041-7242-4266-b2f5-5e046e4b6e73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.209773446Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.21366944Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.213706044Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.213728772Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.21796936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.218006759Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.2180324Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.225383933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.225422383Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.225447023Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.229546424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.229580385Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.22960313Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.23319124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.233226121Z" level=info msg="Updated default CNI network name to kindnet"
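
Note: the CNI monitoring events above (CREATE/WRITE/RENAME on 10-kindnet.conflist.temp, then CREATE of the final conflist) are CRI-O's inotify watch on /etc/cni/net.d reacting to kindnet atomically rewriting its config. A minimal sketch of that watch pattern using the fsnotify package — an approximation of the behavior, not CRI-O's actual implementation:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	// Watch the CNI config directory, as the log lines above show CRI-O doing.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				// At this point CRI-O re-parses the conflists and updates
				// the default network name (kindnet above).
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}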
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	ccafc56fd4a21       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c525cdee14da2       storage-provisioner                 kube-system
	d1b260f911620       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   7177eb3e88656       coredns-66bc5c9577-d4htv            kube-system
	ccbff713b36fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   c525cdee14da2       storage-provisioner                 kube-system
	3cd43960fb6f6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   c4541e801df01       kindnet-sj79h                       kube-system
	3ff518798314f       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   0004856ef0019       busybox-7b57f96db7-cm8cd            default
	7118e270a54de       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   625e3c4593d35       kube-proxy-shqnc                    kube-system
	c50ed772037e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   34fc152febd26       coredns-66bc5c9577-vctcf            kube-system
	82262d66f85eb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   2                   3f8a820509e20       kube-controller-manager-ha-232402   kube-system
	b61c82cad7fbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            1                   1f30462480195       kube-apiserver-ha-232402            kube-system
	1c8917dd6e25d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      1                   bc75faa2b77d5       etcd-ha-232402                      kube-system
	7a416fdc86cf6       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  0                   1c8d8f22b837d       kube-vip-ha-232402                  kube-system
	f20afdb6dc956       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            1                   3ce02f718ba79       kube-scheduler-ha-232402            kube-system
	1902c617979de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   1                   3f8a820509e20       kube-controller-manager-ha-232402   kube-system
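
Note: this table is CRI-level state reported over /var/run/crio/crio.sock and can be reproduced on the node with `sudo crictl ps -a`; the -a flag includes the Exited rows (attempt 1 of storage-provisioner and attempt 1 of kube-controller-manager, both superseded by the Running attempts above).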
	
	
	==> coredns [c50ed772037e681714fda2702cfabc3905954c28cc4a6de24ae74fbcfa3040ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51235 - 42254 "HINFO IN 1197954165026605269.515736649033002582. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.050793767s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d1b260f911620694e1ff384bc3dd99d793f69504fd0119df09fddd2eade05efb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40395 - 6176 "HINFO IN 2061310158999439352.5501595593806426841. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024825273s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
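
Note: both coredns logs tell the same story: the kubernetes plugin cannot complete its initial List calls against the service VIP (dial tcp 10.96.0.1:443: i/o timeout) while the control plane restarts, so the ready plugin keeps reporting `Still waiting on: "kubernetes"` and the server starts with an unsynced API cache. Readiness can be probed directly from inside the pod, assuming the ready plugin's default port:

	curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8181/ready   # 503 until synced, then 200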
	
	
	==> describe nodes <==
	Name:               ha-232402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_35_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:49:15 +0000   Sun, 26 Oct 2025 08:35:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:49:15 +0000   Sun, 26 Oct 2025 08:35:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:49:15 +0000   Sun, 26 Oct 2025 08:35:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:49:15 +0000   Sun, 26 Oct 2025 08:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-232402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                bbe6db68-9456-4b78-bafa-19416f913215
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cm8cd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-d4htv             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-vctcf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-232402                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-sj79h                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-232402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-232402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-shqnc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-232402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-232402                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m59s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-232402 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-232402 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-232402 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-232402 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-232402 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-232402 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                    node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-232402 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Warning  CgroupV1                 6m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node ha-232402 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node ha-232402 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m31s (x8 over 6m31s)  kubelet          Node ha-232402 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m56s                  node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   RegisteredNode           5m51s                  node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	
	
	Name:               ha-232402-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_26T08_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:36:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-232402-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                9d9c7de9-a47e-4495-8bfe-cf6ec5e7ea66
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lb2w6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-232402-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-w4trc                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-232402-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-232402-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ldrkt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-232402-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-232402-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m27s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Warning  CgroupV1                 8m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m52s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m52s (x8 over 8m52s)  kubelet          Node ha-232402-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m52s (x8 over 8m52s)  kubelet          Node ha-232402-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m52s (x8 over 8m52s)  kubelet          Node ha-232402-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8m18s                  node-controller  Node ha-232402-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        7m52s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Warning  CgroupV1                 6m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m26s (x8 over 6m27s)  kubelet          Node ha-232402-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m26s (x8 over 6m27s)  kubelet          Node ha-232402-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m26s (x8 over 6m27s)  kubelet          Node ha-232402-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m56s                  node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Normal   RegisteredNode           5m51s                  node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	
	
	Name:               ha-232402-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_26T08_37_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:37:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:45:16 +0000   Sun, 26 Oct 2025 08:45:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:45:16 +0000   Sun, 26 Oct 2025 08:45:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:45:16 +0000   Sun, 26 Oct 2025 08:45:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:45:16 +0000   Sun, 26 Oct 2025 08:45:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-232402-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4fecf210-37c5-40c8-94fb-51927efb2238
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-h2f8r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-232402-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-5vhnf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-232402-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-232402-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-5d92l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-232402-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-232402-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 3m30s                  kube-proxy       
	  Normal   RegisteredNode           11m                    node-controller  Node ha-232402-m03 event: Registered Node ha-232402-m03 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-232402-m03 event: Registered Node ha-232402-m03 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-232402-m03 event: Registered Node ha-232402-m03 in Controller
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-232402-m03 event: Registered Node ha-232402-m03 in Controller
	  Warning  CgroupV1                 5m56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 5m56s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node ha-232402-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node ha-232402-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m56s (x8 over 5m56s)  kubelet          Node ha-232402-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m56s                  node-controller  Node ha-232402-m03 event: Registered Node ha-232402-m03 in Controller
	  Normal   RegisteredNode           5m51s                  node-controller  Node ha-232402-m03 event: Registered Node ha-232402-m03 in Controller
	  Normal   NodeNotReady             5m6s                   node-controller  Node ha-232402-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        4m56s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	
	
	Name:               ha-232402-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_26T08_39_13_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-232402-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e9a241a8-d572-4875-939f-43a808f4d239
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7997s       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-lx2j2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   Starting                 4m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)    kubelet          Node ha-232402-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)    kubelet          Node ha-232402-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)    kubelet          Node ha-232402-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                  node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   NodeReady                9m32s                kubelet          Node ha-232402-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m5s                 node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           5m56s                node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           5m51s                node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   NodeNotReady             5m6s                 node-controller  Node ha-232402-m04 status is now: NodeNotReady
	  Normal   Starting                 4m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m5s (x8 over 4m8s)  kubelet          Node ha-232402-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m5s (x8 over 4m8s)  kubelet          Node ha-232402-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m5s (x8 over 4m8s)  kubelet          Node ha-232402-m04 status is now: NodeHasSufficientPID
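
Note: across the four describe blocks, the detail that matters for the failed wait is ha-232402-m03: all four node conditions transitioned at 08:45:16 and the node reports Ready=True, yet its kube-controller-manager pod never reported Ready within the 4-minute window above. Node-level readiness across the cluster can be summarized with a jsonpath query such as:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'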
	
	
	==> dmesg <==
	[Oct26 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014214] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501900] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033459] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.999923] kauditd_printk_skb: 36 callbacks suppressed
	[Oct26 08:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 08:14] overlayfs: idmapped layers are currently not supported
	[  +0.063904] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct26 08:20] overlayfs: idmapped layers are currently not supported
	[ +54.744422] overlayfs: idmapped layers are currently not supported
	[Oct26 08:35] overlayfs: idmapped layers are currently not supported
	[ +38.059390] overlayfs: idmapped layers are currently not supported
	[Oct26 08:37] overlayfs: idmapped layers are currently not supported
	[Oct26 08:39] overlayfs: idmapped layers are currently not supported
	[Oct26 08:40] overlayfs: idmapped layers are currently not supported
	[Oct26 08:42] overlayfs: idmapped layers are currently not supported
	[Oct26 08:43] overlayfs: idmapped layers are currently not supported
	[ +30.554221] overlayfs: idmapped layers are currently not supported
	[Oct26 08:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1c8917dd6e25dfe8420b3a3b324ba48edc068e4197ed8c758044d6818d9f3ba7] <==
	{"level":"warn","ts":"2025-10-26T08:44:53.797389Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:44:53.797441Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:44:57.798211Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:44:57.798272Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:44:57.854945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"caa0018a645388bb","rtt":"45.483523ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:44:57.854962Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"caa0018a645388bb","rtt":"494.261µs","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:01.799799Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:01.799954Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:02.855496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"caa0018a645388bb","rtt":"494.261µs","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:02.855525Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"caa0018a645388bb","rtt":"45.483523ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:05.801628Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:05.801685Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:07.856120Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"caa0018a645388bb","rtt":"45.483523ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:07.856164Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"caa0018a645388bb","rtt":"494.261µs","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:09.802763Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:09.802820Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"caa0018a645388bb","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-26T08:45:12.140368Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"caa0018a645388bb","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-26T08:45:12.140477Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:45:12.140519Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:45:12.172243Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"caa0018a645388bb","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-26T08:45:12.172366Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:45:12.240940Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:45:12.249621Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:45:12.857175Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"caa0018a645388bb","rtt":"45.483523ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:12.857094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"caa0018a645388bb","rtt":"494.261µs","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 08:49:28 up  2:31,  0 user,  load average: 0.92, 1.48, 1.67
	Linux ha-232402 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3cd43960fb6f6cf7d48ad29f24625b8433a94419c29a4ee040806279746cd882] <==
	I1026 08:48:48.207838       1 main.go:301] handling current node
	I1026 08:48:58.214799       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:48:58.214897       1 main.go:301] handling current node
	I1026 08:48:58.214920       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1026 08:48:58.214928       1 main.go:324] Node ha-232402-m02 has CIDR [10.244.1.0/24] 
	I1026 08:48:58.215087       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1026 08:48:58.215099       1 main.go:324] Node ha-232402-m03 has CIDR [10.244.2.0/24] 
	I1026 08:48:58.215161       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1026 08:48:58.215173       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	I1026 08:49:08.214830       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1026 08:49:08.214933       1 main.go:324] Node ha-232402-m02 has CIDR [10.244.1.0/24] 
	I1026 08:49:08.215085       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1026 08:49:08.215096       1 main.go:324] Node ha-232402-m03 has CIDR [10.244.2.0/24] 
	I1026 08:49:08.215149       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1026 08:49:08.215161       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	I1026 08:49:08.215214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:49:08.215225       1 main.go:301] handling current node
	I1026 08:49:18.208717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:49:18.208750       1 main.go:301] handling current node
	I1026 08:49:18.208771       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1026 08:49:18.208777       1 main.go:324] Node ha-232402-m02 has CIDR [10.244.1.0/24] 
	I1026 08:49:18.209136       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1026 08:49:18.209157       1 main.go:324] Node ha-232402-m03 has CIDR [10.244.2.0/24] 
	I1026 08:49:18.209437       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1026 08:49:18.209455       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b61c82cad7fbfa81b5335ff117e6fd6ed77be750be18b2795baad05c04597be3] <==
	W1026 08:43:27.305348       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I1026 08:43:27.306929       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:43:27.311228       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:43:27.324731       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:43:27.327726       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:43:27.327735       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:43:27.327906       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:43:27.327828       1 policy_source.go:240] refreshing policies
	I1026 08:43:27.328546       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:43:27.331963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:43:27.332208       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 08:43:27.332300       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:43:27.333361       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:43:27.337869       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:43:27.376169       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:43:27.381839       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:43:27.409979       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:43:27.447274       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1026 08:43:27.475230       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1026 08:43:27.985636       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:43:27.985716       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1026 08:43:29.320031       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1026 08:43:31.336571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:43:31.674963       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:43:31.749809       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [1902c617979ded8ef7430e8c9f9735ce1b420b6259bcc8d54001ef6f37f1fd3f] <==
	I1026 08:42:59.771115       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:43:00.463701       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1026 08:43:00.466861       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:43:00.471372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1026 08:43:00.472329       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 08:43:00.473009       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:43:00.473074       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1026 08:43:17.678266       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [82262d66f85ebbee5a088769db1d28fa6161254725e1ea9a0274c8fce8f56956] <==
	I1026 08:43:31.316012       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 08:43:31.328965       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 08:43:31.329058       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 08:43:31.329086       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 08:43:31.329098       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 08:43:31.329105       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 08:43:31.333032       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:43:31.342855       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:43:31.342958       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:43:31.343031       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:43:31.343112       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402-m04"
	I1026 08:43:31.343147       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402"
	I1026 08:43:31.343175       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402-m02"
	I1026 08:43:31.343200       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402-m03"
	I1026 08:43:31.348289       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:43:31.348719       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 08:43:31.363126       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:43:31.363157       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:43:31.363164       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:43:31.363300       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:43:31.383928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:43:37.810145       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-232402-m04"
	I1026 08:44:09.025477       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-695lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-695lc\": the object has been modified; please apply your changes to the latest version and try again"
	I1026 08:44:09.026276       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3c29a361-1f25-4599-8da3-746461b4ad63", APIVersion:"v1", ResourceVersion:"299", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-695lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-695lc": the object has been modified; please apply your changes to the latest version and try again
	I1026 08:45:25.475444       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-232402-m04"
	
	
	==> kube-proxy [7118e270a54de9fcc61cc18590366b83ba6704ad59a67ab20e69bf4f67d17e7c] <==
	I1026 08:43:28.139143       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:43:28.237828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:43:28.343637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:43:28.343700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 08:43:28.343784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:43:28.391953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:43:28.392073       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:43:28.404165       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:43:28.404888       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:43:28.405610       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:43:28.410809       1 config.go:309] "Starting node config controller"
	I1026 08:43:28.410884       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:43:28.410916       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:43:28.414668       1 config.go:200] "Starting service config controller"
	I1026 08:43:28.414693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:43:28.414822       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:43:28.414828       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:43:28.414840       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:43:28.414844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:43:28.515537       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:43:28.515640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:43:28.515669       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f20afdb6dc9568c5fef5900fd16550aaeceaace97af19ff784772913a96da43b] <==
	E1026 08:43:16.618576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:43:16.985155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:43:17.257539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:43:17.387040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:43:17.411010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:43:17.591287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:43:18.038931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:43:18.304409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:43:21.658558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:43:22.021409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:43:22.669346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:43:22.820402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:43:23.560606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:43:23.584722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:43:23.647879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:43:24.333740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:43:24.557029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:43:24.594195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:43:24.764686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:43:24.816471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:43:25.009097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:43:25.460229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:43:26.017357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:43:26.835015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1026 08:43:44.333513       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373344     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d84717c7-10ce-492a-9b6c-046e382f3a1e-tmp\") pod \"storage-provisioner\" (UID: \"d84717c7-10ce-492a-9b6c-046e382f3a1e\") " pod="kube-system/storage-provisioner"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373438     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6dd95fa-6eed-4b8e-bea2-deab4df77ccf-cni-cfg\") pod \"kindnet-sj79h\" (UID: \"a6dd95fa-6eed-4b8e-bea2-deab4df77ccf\") " pod="kube-system/kindnet-sj79h"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373473     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2bdb796-fd4e-4758-914f-94e4c0586c5c-xtables-lock\") pod \"kube-proxy-shqnc\" (UID: \"e2bdb796-fd4e-4758-914f-94e4c0586c5c\") " pod="kube-system/kube-proxy-shqnc"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373519     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6dd95fa-6eed-4b8e-bea2-deab4df77ccf-xtables-lock\") pod \"kindnet-sj79h\" (UID: \"a6dd95fa-6eed-4b8e-bea2-deab4df77ccf\") " pod="kube-system/kindnet-sj79h"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.407586     795 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.407625     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.415031     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-232402\" already exists" pod="kube-system/kube-vip-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.415073     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.418210     795 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.489284     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-232402\" already exists" pod="kube-system/etcd-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.489673     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.515316     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-232402\" already exists" pod="kube-system/kube-apiserver-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.515524     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.527791     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-232402" podStartSLOduration=0.527762342 podStartE2EDuration="527.762342ms" podCreationTimestamp="2025-10-26 08:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:43:27.486864284 +0000 UTC m=+30.924180047" watchObservedRunningTime="2025-10-26 08:43:27.527762342 +0000 UTC m=+30.965078105"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.534789     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-232402\" already exists" pod="kube-system/kube-controller-manager-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.534983     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.555154     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-232402\" already exists" pod="kube-system/kube-scheduler-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.636645     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-34fc152febd26be6f9b2aed88197c0dca8ec426c0bd76d03686a7417bb745c5f WatchSource:0}: Error finding container 34fc152febd26be6f9b2aed88197c0dca8ec426c0bd76d03686a7417bb745c5f: Status 404 returned error can't find the container with id 34fc152febd26be6f9b2aed88197c0dca8ec426c0bd76d03686a7417bb745c5f
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.671962     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-625e3c4593d35047e5acfb6e38ce32ad3ade32537eb3e25bfad3edce77a485bd WatchSource:0}: Error finding container 625e3c4593d35047e5acfb6e38ce32ad3ade32537eb3e25bfad3edce77a485bd: Status 404 returned error can't find the container with id 625e3c4593d35047e5acfb6e38ce32ad3ade32537eb3e25bfad3edce77a485bd
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.683134     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5 WatchSource:0}: Error finding container c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5: Status 404 returned error can't find the container with id c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.709963     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-c4541e801df01ee1c08057b276e232f08c5ed1408522a457faec80c9a3a56d0c WatchSource:0}: Error finding container c4541e801df01ee1c08057b276e232f08c5ed1408522a457faec80c9a3a56d0c: Status 404 returned error can't find the container with id c4541e801df01ee1c08057b276e232f08c5ed1408522a457faec80c9a3a56d0c
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.723357     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-0004856ef0019873220ddbce07a325ca01de3447de1441db6840aeb8b304037b WatchSource:0}: Error finding container 0004856ef0019873220ddbce07a325ca01de3447de1441db6840aeb8b304037b: Status 404 returned error can't find the container with id 0004856ef0019873220ddbce07a325ca01de3447de1441db6840aeb8b304037b
	Oct 26 08:43:56 ha-232402 kubelet[795]: E1026 08:43:56.682053     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077\": container with ID starting with 286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077 not found: ID does not exist" containerID="286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077"
	Oct 26 08:43:56 ha-232402 kubelet[795]: I1026 08:43:56.682112     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077" err="rpc error: code = NotFound desc = could not find container \"286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077\": container with ID starting with 286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077 not found: ID does not exist"
	Oct 26 08:43:58 ha-232402 kubelet[795]: I1026 08:43:58.049670     795 scope.go:117] "RemoveContainer" containerID="ccbff713b36fcfaa4bcb0299272ff0aef6dd8a01d9a0ff88e1f7959d292d74d0"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-232402 -n ha-232402
helpers_test.go:269: (dbg) Run:  kubectl --context ha-232402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (426.98s)
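The etcd log above shows the raft prober repeatedly reporting peer 192.168.49.4:2380 as unreachable ("dial tcp ... connection refused") until the member rejoined at 08:45:12. A minimal sketch of that reachability check, assuming nothing beyond the peer address quoted in the log (the probePeer helper is hypothetical, not etcd's or the test suite's code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probePeer is a hypothetical stand-in for the prober behind the
	// "connection refused" warnings: it attempts a plain TCP connection
	// to the peer's raft port and reports the dial error, if any.
	func probePeer(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// 192.168.49.4:2380 is the peer URL from the warnings above.
		if err := probePeer("192.168.49.4:2380", 2*time.Second); err != nil {
			fmt.Println("peer unreachable:", err)
			return
		}
		fmt.Println("peer reachable")
	}
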

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-232402" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-232402\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-232402\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-232402\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-232402
helpers_test.go:243: (dbg) docker inspect ha-232402:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7",
	        "Created": "2025-10-26T08:34:55.36697254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342678,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:42:49.204063246Z",
	            "FinishedAt": "2025-10-26T08:42:48.58778224Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/hosts",
	        "LogPath": "/var/lib/docker/containers/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7-json.log",
	        "Name": "/ha-232402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-232402:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-232402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7",
	                "LowerDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/987f90e548c7a566f8e51d0a2f70a0d053e849a76f3c461b8338ea6994a7feb1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-232402",
	                "Source": "/var/lib/docker/volumes/ha-232402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-232402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-232402",
	                "name.minikube.sigs.k8s.io": "ha-232402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11edc866359e31e24dcf48093b22c87ec8e166cbe22464af2be8dced4da00649",
	            "SandboxKey": "/var/run/docker/netns/11edc866359e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-232402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:6e:fd:3d:05:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "256d72a548e060b98ca9fad9f40f3f0a50de572a247e0c2982ac187e2f8a5408",
	                    "EndpointID": "ba20b0b86725488764c95d576ab973385f59579e0c1710b1b409044428d2982b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-232402",
	                        "601e5c9ab7d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
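In the inspect output above, HostConfig.PortBindings requests 127.0.0.1 bindings with Docker-assigned host ports, and NetworkSettings.Ports records the resulting assignments (for example 8443/tcp, the apiserver port, mapped to 127.0.0.1:33183). A minimal sketch that pulls those mappings back out of `docker inspect`, decoding only the fields shown above (the inspectEntry type is a hypothetical subset of Docker's schema):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspectEntry decodes just the port-mapping fields from `docker inspect`,
	// whose top-level output is a JSON array of container objects.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-232402").Output()
		if err != nil {
			log.Fatal(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		// Print every container-port -> host-port mapping, e.g.
		// "8443/tcp -> 127.0.0.1:33183" per the inspect output above.
		for proto, bindings := range entries[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
			}
		}
	}
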
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-232402 -n ha-232402
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 logs -n 25: (1.428609899s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-232402 ssh -n ha-232402-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m02 sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402-m02.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt ha-232402-m04:/home/docker/cp-test_ha-232402-m03_ha-232402-m04.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402-m04.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp testdata/cp-test.txt ha-232402-m04:/home/docker/cp-test.txt                                                             │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1668130144/001/cp-test_ha-232402-m04.txt │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402:/home/docker/cp-test_ha-232402-m04_ha-232402.txt                       │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402 sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402.txt                                                 │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402-m02:/home/docker/cp-test_ha-232402-m04_ha-232402-m02.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m02 sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402-m02.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ cp      │ ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402-m03:/home/docker/cp-test_ha-232402-m04_ha-232402-m03.txt               │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ ssh     │ ha-232402 ssh -n ha-232402-m03 sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402-m03.txt                                         │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ node    │ ha-232402 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:40 UTC │
	│ node    │ ha-232402 node start m02 --alsologtostderr -v 5                                                                                      │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:40 UTC │ 26 Oct 25 08:42 UTC │
	│ node    │ ha-232402 node list --alsologtostderr -v 5                                                                                           │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:42 UTC │                     │
	│ stop    │ ha-232402 stop --alsologtostderr -v 5                                                                                                │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:42 UTC │ 26 Oct 25 08:42 UTC │
	│ start   │ ha-232402 start --wait true --alsologtostderr -v 5                                                                                   │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:42 UTC │                     │
	│ node    │ ha-232402 node list --alsologtostderr -v 5                                                                                           │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:49 UTC │                     │
	│ node    │ ha-232402 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-232402 │ jenkins │ v1.37.0 │ 26 Oct 25 08:49 UTC │ 26 Oct 25 08:49 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:42:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:42:48.917934  342550 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:42:48.918170  342550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:42:48.918204  342550 out.go:374] Setting ErrFile to fd 2...
	I1026 08:42:48.918225  342550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:42:48.918525  342550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:42:48.918983  342550 out.go:368] Setting JSON to false
	I1026 08:42:48.919916  342550 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8719,"bootTime":1761459450,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:42:48.920018  342550 start.go:141] virtualization:  
	I1026 08:42:48.923144  342550 out.go:179] * [ha-232402] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:42:48.927011  342550 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:42:48.927093  342550 notify.go:220] Checking for updates...
	I1026 08:42:48.933001  342550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:42:48.935959  342550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:48.939045  342550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:42:48.941971  342550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:42:48.944900  342550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:42:48.948888  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:48.948992  342550 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:42:48.982651  342550 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:42:48.982836  342550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:42:49.052116  342550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-26 08:42:49.041304773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:42:49.052225  342550 docker.go:318] overlay module found
	I1026 08:42:49.055376  342550 out.go:179] * Using the docker driver based on existing profile
	I1026 08:42:49.058272  342550 start.go:305] selected driver: docker
	I1026 08:42:49.058291  342550 start.go:925] validating driver "docker" against &{Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:49.058453  342550 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:42:49.058555  342550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:42:49.113827  342550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-26 08:42:49.10402828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:42:49.114262  342550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:42:49.114296  342550 cni.go:84] Creating CNI manager for ""
	I1026 08:42:49.114371  342550 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 08:42:49.114423  342550 start.go:349] cluster config:
	{Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
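
Note: the cluster config dumped above is persisted as the profile's config.json (the exact path appears in the "Saving config to ..." line below). A minimal sketch for extracting the per-node topology from it, assuming jq is available on the host:

	jq '.Nodes[] | {Name, IP, Port, ControlPlane, Worker}' \
	  /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json
	# expect four entries: the unnamed primary plus m02/m03 as control planes, m04 as a worker
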
	I1026 08:42:49.119386  342550 out.go:179] * Starting "ha-232402" primary control-plane node in "ha-232402" cluster
	I1026 08:42:49.122223  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:42:49.125135  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:42:49.127883  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:49.127936  342550 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 08:42:49.127964  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:42:49.127976  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:42:49.128054  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:42:49.128065  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:42:49.128205  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:49.148213  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:42:49.148234  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:42:49.148247  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:42:49.148277  342550 start.go:360] acquireMachinesLock for ha-232402: {Name:mkd235a265416fa355dec74b5ac56d04d491256e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:42:49.148333  342550 start.go:364] duration metric: took 39.081µs to acquireMachinesLock for "ha-232402"
	I1026 08:42:49.148353  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:42:49.148358  342550 fix.go:54] fixHost starting: 
	I1026 08:42:49.148604  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:42:49.166112  342550 fix.go:112] recreateIfNeeded on ha-232402: state=Stopped err=<nil>
	W1026 08:42:49.166154  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:42:49.169342  342550 out.go:252] * Restarting existing docker container for "ha-232402" ...
	I1026 08:42:49.169424  342550 cli_runner.go:164] Run: docker start ha-232402
	I1026 08:42:49.418525  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:42:49.441545  342550 kic.go:430] container "ha-232402" state is running.
	I1026 08:42:49.441931  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:49.465537  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:49.465781  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:42:49.465856  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:49.483751  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:49.484066  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:49.484076  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:42:49.484629  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55242->127.0.0.1:33180: read: connection reset by peer
	I1026 08:42:52.642170  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402
	
	I1026 08:42:52.642200  342550 ubuntu.go:182] provisioning hostname "ha-232402"
	I1026 08:42:52.642273  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:52.660229  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:52.660550  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:52.660567  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402 && echo "ha-232402" | sudo tee /etc/hostname
	I1026 08:42:52.820313  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402
	
	I1026 08:42:52.820402  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:52.840800  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:52.841134  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:52.841160  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:42:52.990861  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
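
Note: the script above is minikube's idempotent hostname pinning. It follows the Debian convention of mapping the machine's own hostname to 127.0.1.1 so the name resolves without DNS. One way to verify from the host (a sketch; the output depends on what was already in /etc/hosts):

	minikube -p ha-232402 ssh -- getent hosts ha-232402
	# 127.0.1.1       ha-232402
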
	I1026 08:42:52.990892  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:42:52.990914  342550 ubuntu.go:190] setting up certificates
	I1026 08:42:52.990924  342550 provision.go:84] configureAuth start
	I1026 08:42:52.990990  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:53.009824  342550 provision.go:143] copyHostCerts
	I1026 08:42:53.009871  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:42:53.009906  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:42:53.009927  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:42:53.010020  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:42:53.010118  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:42:53.010140  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:42:53.010145  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:42:53.010179  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:42:53.010234  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:42:53.010255  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:42:53.010265  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:42:53.010300  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:42:53.010365  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402 san=[127.0.0.1 192.168.49.2 ha-232402 localhost minikube]
	I1026 08:42:54.039767  342550 provision.go:177] copyRemoteCerts
	I1026 08:42:54.039841  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:42:54.039881  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.058074  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.162887  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:42:54.162960  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1026 08:42:54.182166  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:42:54.182225  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:42:54.200141  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:42:54.200208  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:42:54.218057  342550 provision.go:87] duration metric: took 1.227107421s to configureAuth
	I1026 08:42:54.218140  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:42:54.218410  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:54.218534  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.236086  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:54.236409  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1026 08:42:54.236427  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:42:54.568914  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
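
Note: the command above writes an environment file for the crio systemd unit and restarts the service, so CRI-O treats the whole service CIDR (10.96.0.0/12) as an insecure registry. To confirm what landed (sketch; that the crio unit sources this file is an assumption about the kicbase image's unit layout):

	minikube -p ha-232402 ssh -- sudo cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
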
	I1026 08:42:54.568937  342550 machine.go:96] duration metric: took 5.103139338s to provisionDockerMachine
	I1026 08:42:54.568948  342550 start.go:293] postStartSetup for "ha-232402" (driver="docker")
	I1026 08:42:54.568959  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:42:54.569025  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:42:54.569071  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.593317  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.698695  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:42:54.702088  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:42:54.702117  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:42:54.702129  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:42:54.702512  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:42:54.702614  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:42:54.702623  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:42:54.702789  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:42:54.713617  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:42:54.730927  342550 start.go:296] duration metric: took 161.96257ms for postStartSetup
	I1026 08:42:54.731067  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:42:54.731128  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.748393  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.851766  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:42:54.857035  342550 fix.go:56] duration metric: took 5.708668211s for fixHost
	I1026 08:42:54.857061  342550 start.go:83] releasing machines lock for "ha-232402", held for 5.708719658s
	I1026 08:42:54.857136  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:42:54.874075  342550 ssh_runner.go:195] Run: cat /version.json
	I1026 08:42:54.874138  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.874395  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:42:54.874465  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:42:54.896310  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:54.897209  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:42:55.096305  342550 ssh_runner.go:195] Run: systemctl --version
	I1026 08:42:55.103174  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:42:55.140113  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:42:55.144490  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:42:55.144568  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:42:55.152609  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:42:55.152677  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:42:55.152720  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:42:55.152774  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:42:55.168885  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:42:55.183022  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:42:55.183092  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:42:55.199361  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:42:55.212983  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:42:55.329311  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:42:55.439788  342550 docker.go:234] disabling docker service ...
	I1026 08:42:55.439882  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:42:55.455129  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:42:55.468360  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:42:55.591545  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:42:55.712355  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:42:55.725339  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:42:55.739516  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:42:55.739619  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.748984  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:42:55.749080  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.758145  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.767369  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.776548  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:42:55.784814  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.794122  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.802447  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:42:55.811302  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:42:55.818789  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:42:55.826164  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:42:55.945131  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
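
Note: the sed pipeline above rewrites CRI-O's drop-in config in place before the restart. A quick spot-check of the result (sketch; the expected values come straight from the sed expressions above):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
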
	I1026 08:42:56.073628  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:42:56.073791  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:42:56.077812  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:42:56.077890  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:42:56.081474  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:42:56.106451  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
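
Note: the crictl calls here need no flags because the /etc/crictl.yaml written above pins crictl to CRI-O's socket. The explicit equivalent (sketch):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
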
	I1026 08:42:56.106572  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:42:56.135851  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:42:56.170040  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:42:56.172899  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:42:56.189266  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:42:56.192940  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:42:56.202818  342550 kubeadm.go:883] updating cluster {Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:42:56.202967  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:56.203031  342550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:42:56.242649  342550 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:42:56.242675  342550 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:42:56.242785  342550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:42:56.267929  342550 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:42:56.267952  342550 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:42:56.267962  342550 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 08:42:56.268090  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
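
Note: the kubelet ExecStart override above is installed as a systemd drop-in (the 10-kubeadm.conf scp below); the merged unit can be inspected with (sketch):

	minikube -p ha-232402 ssh -- systemctl cat kubelet
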
	I1026 08:42:56.268186  342550 ssh_runner.go:195] Run: crio config
	I1026 08:42:56.329063  342550 cni.go:84] Creating CNI manager for ""
	I1026 08:42:56.329091  342550 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 08:42:56.329119  342550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:42:56.329143  342550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-232402 NodeName:ha-232402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:42:56.329378  342550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-232402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
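
Note: the generated kubeadm config above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file; it is copied to /var/tmp/minikube/kubeadm.yaml.new below. Recent kubeadm releases can sanity-check such a file before use (sketch; that kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.34.1 is an assumption, suggested by the binaries check further down):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
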
	I1026 08:42:56.329404  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:42:56.329467  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:42:56.341574  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
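
Note: kube-vip can load-balance the control plane over IPVS, but only when the ip_vs kernel modules are present; `lsmod` finds none here, so minikube falls back to plain ARP-based VIP failover. On a host that ships the module, it can be brought in manually (sketch):

	sudo modprobe ip_vs && lsmod | grep ip_vs
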
	I1026 08:42:56.341697  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
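
Note: the manifest above is deployed as a static pod (see the kube-vip.yaml scp below), so each control-plane kubelet runs kube-vip directly and the instances lease-elect a holder for the 192.168.49.254 VIP. Once the node is up it can be spot-checked with (sketch):

	minikube -p ha-232402 ssh -- sudo crictl ps --name kube-vip
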
	I1026 08:42:56.341768  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:42:56.350317  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:42:56.350440  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 08:42:56.358169  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1026 08:42:56.371463  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:42:56.384425  342550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1026 08:42:56.397225  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:42:56.410169  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:42:56.413685  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:42:56.423463  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:42:56.541144  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:42:56.557207  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.2
	I1026 08:42:56.557272  342550 certs.go:195] generating shared ca certs ...
	I1026 08:42:56.557303  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:56.557467  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:42:56.557541  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:42:56.557576  342550 certs.go:257] generating profile certs ...
	I1026 08:42:56.557692  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:42:56.557760  342550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea
	I1026 08:42:56.557782  342550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1026 08:42:57.202922  342550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea ...
	I1026 08:42:57.202955  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea: {Name:mk933c6500306ddc2c8fa2cedfd5052423ec2536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:57.203128  342550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea ...
	I1026 08:42:57.203144  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea: {Name:mkf5c2bd5c725d62808b0af7cfa80f3d97af9f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:57.203241  342550 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt.3caca7ea -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt
	I1026 08:42:57.204200  342550 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.3caca7ea -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key
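
Note: the regenerated apiserver cert enumerates every address a client might dial: the service IPs 10.96.0.1 and 10.0.0.1, localhost, all three control-plane node IPs, and the kube-vip VIP 192.168.49.254. The SAN list can be inspected with openssl (sketch):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt \
	  | grep -A2 'Subject Alternative Name'
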
	I1026 08:42:57.204356  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:42:57.204376  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:42:57.204394  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:42:57.204414  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:42:57.204432  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:42:57.204452  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:42:57.204471  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:42:57.204482  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:42:57.204496  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:42:57.204543  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:42:57.204577  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:42:57.204589  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:42:57.204613  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:42:57.204639  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:42:57.204664  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:42:57.204710  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:42:57.204740  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.204757  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.204770  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.205388  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:42:57.231752  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:42:57.264536  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:42:57.295902  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:42:57.324874  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:42:57.356420  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:42:57.393782  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:42:57.430094  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:42:57.476853  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:42:57.514216  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:42:57.542038  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:42:57.573718  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:42:57.596671  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:42:57.604302  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:42:57.620193  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.624096  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.624163  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:42:57.684171  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:42:57.692726  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:42:57.703409  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.709875  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.709939  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:42:57.761720  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:42:57.770155  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:42:57.782379  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.786510  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.786589  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:42:57.842092  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
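
	Note on the step above: the openssl/ln sequence installs each CA under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so system TLS lookups can resolve it. A minimal Go sketch of the same idea, assuming a hypothetical helper installCertSymlink (this is not minikube's actual code):

	    // Illustrative only: compute the cert's subject hash via openssl,
	    // then point /etc/ssl/certs/<hash>.0 at the PEM file (ln -fs).
	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    func installCertSymlink(pemPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return fmt.Errorf("hashing %s: %w", pemPath, err)
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        _ = os.Remove(link) // replace any existing link, as ln -fs does
	        return os.Symlink(pemPath, link)
	    }

	    func main() {
	        if err := installCertSymlink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }
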
	I1026 08:42:57.850459  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:42:57.854127  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:42:57.922143  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:42:57.991084  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:42:58.032484  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:42:58.075471  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:42:58.119880  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
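
	The repeated "openssl x509 -checkend 86400" probes above verify that each control-plane certificate is still valid for at least the next 24 hours. A rough Go equivalent of that check, as a sketch only (path and helper name are illustrative):

	    // Reports whether the certificate expires within duration d,
	    // mirroring what `openssl x509 -checkend <seconds>` tests.
	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    func expiresWithin(certPath string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(certPath)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM block in %s", certPath)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("expires within 24h:", soon)
	    }
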
	I1026 08:42:58.162522  342550 kubeadm.go:400] StartCluster: {Name:ha-232402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:42:58.162655  342550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:42:58.162737  342550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:42:58.219487  342550 cri.go:89] found id: "b61c82cad7fbfa81b5335ff117e6fd6ed77be750be18b2795baad05c04597be3"
	I1026 08:42:58.219510  342550 cri.go:89] found id: "1c8917dd6e25dfe8420b3a3b324ba48edc068e4197ed8c758044d6818d9f3ba7"
	I1026 08:42:58.219516  342550 cri.go:89] found id: "7a416fdc86cf67bda0bfabac32d527db13c8586bd8ae683896061d13e70b3bf2"
	I1026 08:42:58.219520  342550 cri.go:89] found id: "f20afdb6dc9568c5fef5900fd16550aaeceaace97af19ff784772913a96da43b"
	I1026 08:42:58.219523  342550 cri.go:89] found id: "1902c617979ded8ef7430e8c9f9735ce1b420b6259bcc8d54001ef6f37f1fd3f"
	I1026 08:42:58.219526  342550 cri.go:89] found id: ""
	I1026 08:42:58.219576  342550 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:42:58.231211  342550 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:42:58Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:42:58.231293  342550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:42:58.239815  342550 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:42:58.239836  342550 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:42:58.239895  342550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:42:58.252247  342550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:42:58.252648  342550 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-232402" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:58.252758  342550 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "ha-232402" cluster setting kubeconfig missing "ha-232402" context setting]
	I1026 08:42:58.253044  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.253554  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:42:58.254045  342550 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:42:58.254065  342550 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:42:58.254095  342550 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:42:58.254103  342550 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:42:58.254108  342550 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:42:58.254472  342550 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1026 08:42:58.256702  342550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:42:58.269972  342550 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1026 08:42:58.269997  342550 kubeadm.go:601] duration metric: took 30.154432ms to restartPrimaryControlPlane
	I1026 08:42:58.270006  342550 kubeadm.go:402] duration metric: took 107.493524ms to StartCluster
	I1026 08:42:58.270028  342550 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.270094  342550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:42:58.270678  342550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:42:58.270895  342550 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:42:58.270923  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:42:58.270932  342550 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:42:58.271445  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:58.276967  342550 out.go:179] * Enabled addons: 
	I1026 08:42:58.279988  342550 addons.go:514] duration metric: took 9.042438ms for enable addons: enabled=[]
	I1026 08:42:58.280034  342550 start.go:246] waiting for cluster config update ...
	I1026 08:42:58.280044  342550 start.go:255] writing updated cluster config ...
	I1026 08:42:58.283287  342550 out.go:203] 
	I1026 08:42:58.286419  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:42:58.286541  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.289808  342550 out.go:179] * Starting "ha-232402-m02" control-plane node in "ha-232402" cluster
	I1026 08:42:58.292646  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:42:58.295642  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:42:58.298397  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:42:58.298422  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:42:58.298528  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:42:58.298543  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:42:58.298666  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.298902  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:42:58.334398  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:42:58.334424  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:42:58.334438  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:42:58.334461  342550 start.go:360] acquireMachinesLock for ha-232402-m02: {Name:mkcee86299772a936378440a31e878294fbfa9f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:42:58.334510  342550 start.go:364] duration metric: took 34.667µs to acquireMachinesLock for "ha-232402-m02"
	I1026 08:42:58.334530  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:42:58.334535  342550 fix.go:54] fixHost starting: m02
	I1026 08:42:58.334809  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:42:58.368471  342550 fix.go:112] recreateIfNeeded on ha-232402-m02: state=Stopped err=<nil>
	W1026 08:42:58.368496  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:42:58.371679  342550 out.go:252] * Restarting existing docker container for "ha-232402-m02" ...
	I1026 08:42:58.371767  342550 cli_runner.go:164] Run: docker start ha-232402-m02
	I1026 08:42:58.772810  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:42:58.801152  342550 kic.go:430] container "ha-232402-m02" state is running.
	I1026 08:42:58.801522  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:42:58.832989  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:42:58.833245  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:42:58.833311  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:42:58.867008  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:42:58.867344  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:42:58.867353  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:42:58.868022  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:43:02.066423  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m02
	
	I1026 08:43:02.066511  342550 ubuntu.go:182] provisioning hostname "ha-232402-m02"
	I1026 08:43:02.066610  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:02.100484  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:02.100810  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:02.100821  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m02 && echo "ha-232402-m02" | sudo tee /etc/hostname
	I1026 08:43:02.308004  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m02
	
	I1026 08:43:02.308166  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:02.334891  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:02.335210  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:02.335226  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:02.514818  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:02.514905  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:43:02.514937  342550 ubuntu.go:190] setting up certificates
	I1026 08:43:02.514979  342550 provision.go:84] configureAuth start
	I1026 08:43:02.515065  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:43:02.560373  342550 provision.go:143] copyHostCerts
	I1026 08:43:02.560414  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:02.560461  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:43:02.560470  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:02.560546  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:43:02.560626  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:02.560643  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:43:02.560648  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:02.560672  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:43:02.560715  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:02.560731  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:43:02.560735  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:02.560758  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:43:02.560803  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m02 san=[127.0.0.1 192.168.49.3 ha-232402-m02 localhost minikube]
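
	The "generating server cert" line above signs a machine server certificate with the minikube CA, carrying the listed IP and DNS SANs. A self-contained Go sketch of that shape (a stand-in CA is generated in-process here; minikube instead loads ca.pem/ca-key.pem from disk):

	    package main

	    import (
	        "crypto/ecdsa"
	        "crypto/elliptic"
	        "crypto/rand"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        // Stand-in CA (illustrative; errors elided for brevity).
	        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().AddDate(10, 0, 0),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        caCert, _ := x509.ParseCertificate(caDER)

	        // Server cert with the SANs from the log line above.
	        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	        srvTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{CommonName: "ha-232402-m02"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0),
	            DNSNames:     []string{"ha-232402-m02", "localhost", "minikube"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	    }
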
	I1026 08:43:03.208517  342550 provision.go:177] copyRemoteCerts
	I1026 08:43:03.208589  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:03.208637  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.226696  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:03.338996  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:43:03.339064  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:43:03.364234  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:43:03.364299  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:03.392294  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:43:03.392357  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:43:03.425649  342550 provision.go:87] duration metric: took 910.644183ms to configureAuth
	I1026 08:43:03.425677  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:43:03.425959  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:03.426065  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.458884  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:03.459198  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1026 08:43:03.459218  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:03.839944  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:03.839966  342550 machine.go:96] duration metric: took 5.006711527s to provisionDockerMachine
	I1026 08:43:03.839977  342550 start.go:293] postStartSetup for "ha-232402-m02" (driver="docker")
	I1026 08:43:03.839988  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:03.840046  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:03.840113  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:03.857989  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:03.966802  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:03.970325  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:43:03.970356  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:43:03.970368  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:43:03.970455  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:43:03.970594  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:43:03.970609  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:43:03.970707  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:03.978929  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:04.001526  342550 start.go:296] duration metric: took 161.533931ms for postStartSetup
	I1026 08:43:04.001644  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:43:04.001711  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.029362  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.147606  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:43:04.158657  342550 fix.go:56] duration metric: took 5.824113305s for fixHost
	I1026 08:43:04.158679  342550 start.go:83] releasing machines lock for "ha-232402-m02", held for 5.824161494s
	I1026 08:43:04.158852  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m02
	I1026 08:43:04.190337  342550 out.go:179] * Found network options:
	I1026 08:43:04.193487  342550 out.go:179]   - NO_PROXY=192.168.49.2
	W1026 08:43:04.196584  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:04.196654  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:43:04.196729  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:04.196774  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.197012  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:04.197069  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m02
	I1026 08:43:04.241682  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.251119  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m02/id_rsa Username:docker}
	I1026 08:43:04.602534  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:04.612399  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:04.612470  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:04.625469  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:43:04.625494  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:43:04.625529  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:43:04.625585  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:04.650032  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:04.672644  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:04.672717  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:04.691930  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:04.713738  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:04.895936  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:05.091815  342550 docker.go:234] disabling docker service ...
	I1026 08:43:05.091890  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:05.117939  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:05.141552  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:05.385159  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:05.717724  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:05.754449  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:05.787254  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:43:05.787365  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.812135  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:05.812208  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.833814  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.869621  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.895385  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:43:05.916665  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.945670  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:05.979261  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:06.007406  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:43:06.024152  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:43:06.048022  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:06.407451  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:43:07.762107  342550 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.354620144s)
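
	After the sed edits above, the touched keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows (section headers are assumed; the log only shows the individual key rewrites):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
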
	I1026 08:43:07.762151  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:43:07.762206  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:43:07.766031  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:43:07.766103  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:43:07.769733  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:43:07.814809  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:43:07.814907  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:43:07.866941  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:43:07.921153  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:43:07.924118  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:43:07.927047  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:43:07.969779  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:43:07.973594  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:43:07.987191  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:43:07.987445  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:07.987717  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:43:08.008779  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:43:08.009283  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.3
	I1026 08:43:08.009300  342550 certs.go:195] generating shared ca certs ...
	I1026 08:43:08.009316  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:43:08.009468  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:43:08.009524  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:43:08.009531  342550 certs.go:257] generating profile certs ...
	I1026 08:43:08.009619  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:43:08.009879  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.fae769c1
	I1026 08:43:08.009932  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:43:08.009943  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:43:08.009956  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:43:08.009967  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:43:08.009979  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:43:08.009990  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:43:08.010002  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:43:08.010014  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:43:08.010024  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:43:08.010077  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:43:08.010105  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:43:08.010112  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:43:08.010135  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:43:08.010156  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:43:08.010177  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:43:08.010236  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:08.010266  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.010279  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.010289  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.010370  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:43:08.032241  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:43:08.139306  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 08:43:08.144038  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 08:43:08.155846  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 08:43:08.160324  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 08:43:08.170065  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 08:43:08.174060  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 08:43:08.188168  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 08:43:08.192073  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1026 08:43:08.200629  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 08:43:08.205998  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 08:43:08.216901  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 08:43:08.221162  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 08:43:08.231147  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:43:08.250111  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:43:08.269251  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:43:08.288444  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:43:08.306389  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:43:08.325763  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:43:08.345171  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:43:08.363276  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:43:08.388034  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:43:08.407557  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:43:08.426288  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:43:08.445629  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 08:43:08.459889  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 08:43:08.474059  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 08:43:08.487641  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1026 08:43:08.501076  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 08:43:08.514660  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 08:43:08.530178  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 08:43:08.543653  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:43:08.551337  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:43:08.559877  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.563863  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.563978  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:43:08.606128  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:43:08.614418  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:43:08.622608  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.626862  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.626984  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:43:08.668441  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:43:08.678228  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:43:08.694156  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.699405  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.699525  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:43:08.741501  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:43:08.749451  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:43:08.753614  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:43:08.794639  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:43:08.835994  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:43:08.884952  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:43:08.929998  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:43:08.973568  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 08:43:09.018771  342550 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1026 08:43:09.018901  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:43:09.018964  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:43:09.019040  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:43:09.033326  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:43:09.033397  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1026 08:43:09.033460  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:43:09.042327  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:43:09.042441  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 08:43:09.053364  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:43:09.067913  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:43:09.083307  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:43:09.097627  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:43:09.102025  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:43:09.114414  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:09.252566  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:43:09.267980  342550 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:43:09.268336  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:09.273239  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:43:09.276128  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:09.414962  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:43:09.429491  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:43:09.429623  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:43:09.429932  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m02" to be "Ready" ...
	I1026 08:43:27.238867  342550 node_ready.go:49] node "ha-232402-m02" is "Ready"
	I1026 08:43:27.238899  342550 node_ready.go:38] duration metric: took 17.808924366s for node "ha-232402-m02" to be "Ready" ...
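
The ~17.8s node_ready wait above is a poll on the node object until its Ready condition reports True; the restarted kubelet has to re-register and pass its first status sync before that flips. A hedged client-go sketch of such a wait (assuming a configured *kubernetes.Clientset; not minikube's exact node_ready.go code):

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls until the node's Ready condition is True — a
	// sketch of the wait that took ~17.8s in the log above.
	func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors while the VIP settles
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
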
	I1026 08:43:27.238912  342550 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:43:27.238976  342550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:43:27.263231  342550 api_server.go:72] duration metric: took 17.995203495s to wait for apiserver process to appear ...
	I1026 08:43:27.263257  342550 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:43:27.263278  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:27.286625  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:43:27.286661  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:43:27.763965  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:27.797733  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:27.797765  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:43:28.264086  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:28.272772  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:28.272800  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:43:28.763318  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:28.773873  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:43:28.773903  342550 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:43:29.263609  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:43:29.271856  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:43:29.272939  342550 api_server.go:141] control plane version: v1.34.1
	I1026 08:43:29.272963  342550 api_server.go:131] duration metric: took 2.009698678s to wait for apiserver health ...
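
The healthz probes above are unauthenticated, so they map to system:anonymous: first a 403 while that user may not GET /healthz, then 500s while post-start hooks such as rbac/bootstrap-roles and start-kubernetes-service-cidr-controller finish, and finally 200 "ok". The prober only cares about reaching a 200, so a minimal sketch looks like this (plain net/http with TLS verification skipped, as for a liveness-style check; not minikube's exact api_server.go code):

	package healthz

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// WaitOK polls url until it returns HTTP 200, treating 403/500 (or any
	// other status) as "up but not healthy yet", as the log above shows.
	func WaitOK(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Unauthenticated liveness-style probe: skip cert verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
	}
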
	I1026 08:43:29.272972  342550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:43:29.286570  342550 system_pods.go:59] 26 kube-system pods found
	I1026 08:43:29.286609  342550 system_pods.go:61] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.286618  342550 system_pods.go:61] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.286624  342550 system_pods.go:61] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:43:29.286629  342550 system_pods.go:61] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:43:29.286634  342550 system_pods.go:61] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:43:29.286639  342550 system_pods.go:61] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:43:29.286644  342550 system_pods.go:61] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:43:29.286648  342550 system_pods.go:61] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:43:29.286659  342550 system_pods.go:61] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:43:29.286666  342550 system_pods.go:61] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:43:29.286679  342550 system_pods.go:61] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:43:29.286684  342550 system_pods.go:61] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:43:29.286690  342550 system_pods.go:61] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:43:29.286698  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:43:29.286704  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:43:29.286759  342550 system_pods.go:61] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:43:29.286774  342550 system_pods.go:61] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:43:29.286779  342550 system_pods.go:61] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:43:29.286784  342550 system_pods.go:61] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:43:29.286790  342550 system_pods.go:61] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:43:29.286797  342550 system_pods.go:61] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:43:29.286807  342550 system_pods.go:61] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:43:29.286813  342550 system_pods.go:61] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:43:29.286818  342550 system_pods.go:61] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:43:29.286824  342550 system_pods.go:61] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:43:29.286830  342550 system_pods.go:61] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:43:29.286835  342550 system_pods.go:74] duration metric: took 13.857629ms to wait for pod list to return data ...
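
system_pods.go accepts pods whose phase is Running even when individual containers are still unready (the coredns and ha-232402 control-plane pods above), since readiness converges once the apiserver finishes bootstrapping. A rough client-go equivalent of that survey (same clientset assumption as the node-ready sketch):

	package syspods

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// ListSystemPods prints each kube-system pod's phase and any containers
	// that are not yet Ready, mirroring the survey in the log above.
	func ListSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			var unready []string
			for _, s := range p.Status.ContainerStatuses {
				if !s.Ready {
					unready = append(unready, s.Name)
				}
			}
			fmt.Printf("%s: %s (unready containers: %v)\n", p.Name, p.Status.Phase, unready)
		}
		return nil
	}
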
	I1026 08:43:29.286845  342550 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:43:29.292456  342550 default_sa.go:45] found service account: "default"
	I1026 08:43:29.292483  342550 default_sa.go:55] duration metric: took 5.6309ms for default service account to be created ...
	I1026 08:43:29.292493  342550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:43:29.303662  342550 system_pods.go:86] 26 kube-system pods found
	I1026 08:43:29.303699  342550 system_pods.go:89] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.303711  342550 system_pods.go:89] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:43:29.303717  342550 system_pods.go:89] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:43:29.303722  342550 system_pods.go:89] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:43:29.303726  342550 system_pods.go:89] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:43:29.303731  342550 system_pods.go:89] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:43:29.303736  342550 system_pods.go:89] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:43:29.303741  342550 system_pods.go:89] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:43:29.303745  342550 system_pods.go:89] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:43:29.303755  342550 system_pods.go:89] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:43:29.303760  342550 system_pods.go:89] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:43:29.303771  342550 system_pods.go:89] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:43:29.303778  342550 system_pods.go:89] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:43:29.303783  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:43:29.303788  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:43:29.303793  342550 system_pods.go:89] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:43:29.303796  342550 system_pods.go:89] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:43:29.303800  342550 system_pods.go:89] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:43:29.303804  342550 system_pods.go:89] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:43:29.303810  342550 system_pods.go:89] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:43:29.303815  342550 system_pods.go:89] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:43:29.303819  342550 system_pods.go:89] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:43:29.303823  342550 system_pods.go:89] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:43:29.303827  342550 system_pods.go:89] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:43:29.303830  342550 system_pods.go:89] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:43:29.303834  342550 system_pods.go:89] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:43:29.303840  342550 system_pods.go:126] duration metric: took 11.341628ms to wait for k8s-apps to be running ...
	I1026 08:43:29.303854  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:43:29.303908  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:43:29.323431  342550 system_svc.go:56] duration metric: took 19.574494ms WaitForService to wait for kubelet
	I1026 08:43:29.323460  342550 kubeadm.go:586] duration metric: took 20.055438295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:43:29.323478  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:43:29.333801  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333841  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333854  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333859  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333864  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333868  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333872  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:43:29.333876  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:43:29.333881  342550 node_conditions.go:105] duration metric: took 10.39707ms to run NodePressure ...
	I1026 08:43:29.333892  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:43:29.333919  342550 start.go:255] writing updated cluster config ...
	I1026 08:43:29.337457  342550 out.go:203] 
	I1026 08:43:29.340743  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:29.340922  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.344362  342550 out.go:179] * Starting "ha-232402-m03" control-plane node in "ha-232402" cluster
	I1026 08:43:29.348018  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:43:29.351781  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:43:29.354814  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:43:29.354918  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:43:29.354883  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:43:29.355255  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:43:29.355280  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:43:29.355447  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.375411  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:43:29.375429  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:43:29.375442  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:43:29.375466  342550 start.go:360] acquireMachinesLock for ha-232402-m03: {Name:mk956b02a4f725f23f9fb3f2ce92112bc2e1b45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:43:29.375516  342550 start.go:364] duration metric: took 35.873µs to acquireMachinesLock for "ha-232402-m03"
	I1026 08:43:29.375534  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:43:29.375540  342550 fix.go:54] fixHost starting: m03
	I1026 08:43:29.375948  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:43:29.401895  342550 fix.go:112] recreateIfNeeded on ha-232402-m03: state=Stopped err=<nil>
	W1026 08:43:29.401920  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:43:29.405493  342550 out.go:252] * Restarting existing docker container for "ha-232402-m03" ...
	I1026 08:43:29.405580  342550 cli_runner.go:164] Run: docker start ha-232402-m03
	I1026 08:43:29.812599  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:43:29.835988  342550 kic.go:430] container "ha-232402-m03" state is running.
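
cli_runner drives everything through the docker CLI; the inspect call above uses a Go template to read the container state after `docker start`. A small os/exec sketch of that probe (hypothetical helper):

	package kic

	import (
		"os/exec"
		"strings"
	)

	// ContainerState returns docker's state string ("running", "exited", ...)
	// using the same inspect template that appears in the log above.
	func ContainerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
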
	I1026 08:43:29.836452  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:29.866387  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:43:29.866681  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:43:29.866829  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:29.906362  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:29.906690  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:29.907638  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:43:29.908402  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:43:33.170636  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m03
	
	I1026 08:43:33.170746  342550 ubuntu.go:182] provisioning hostname "ha-232402-m03"
	I1026 08:43:33.170851  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:33.206417  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:33.206830  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:33.206844  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m03 && echo "ha-232402-m03" | sudo tee /etc/hostname
	I1026 08:43:33.524716  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m03
	
	I1026 08:43:33.524858  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:33.549504  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:33.549810  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:33.549827  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:33.856044  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:33.856113  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:43:33.856146  342550 ubuntu.go:190] setting up certificates
	I1026 08:43:33.856188  342550 provision.go:84] configureAuth start
	I1026 08:43:33.856287  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:33.880087  342550 provision.go:143] copyHostCerts
	I1026 08:43:33.880126  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:33.880159  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:43:33.880166  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:43:33.880246  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:43:33.880325  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:33.880342  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:43:33.880346  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:43:33.880369  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:43:33.880408  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:33.880423  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:43:33.880427  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:43:33.880448  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:43:33.880491  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m03 san=[127.0.0.1 192.168.49.4 ha-232402-m03 localhost minikube]
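
configureAuth issues a fresh server certificate for the machine with the SAN set shown above — the loopback and node IPs plus the hostname, localhost and minikube — signed by the CA under .minikube/certs. A compressed crypto/x509 sketch of building such a SAN cert (self-signed here for brevity; minikube signs with ca.pem/ca-key.pem):

	package certs

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// NewServerCert issues a certificate whose SANs cover the given hosts,
	// splitting IPs (127.0.0.1, 192.168.49.4) from DNS names (ha-232402-m03,
	// localhost, minikube).
	func NewServerCert(hosts []string, org string) (certPEM, keyPEM []byte, err error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, nil, err
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, h := range hosts {
			if ip := net.ParseIP(h); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, h)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, nil, err
		}
		keyDER, err := x509.MarshalECPrivateKey(key)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
		return certPEM, keyPEM, nil
	}
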
	I1026 08:43:34.115589  342550 provision.go:177] copyRemoteCerts
	I1026 08:43:34.115701  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:34.115779  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.133889  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:34.307782  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:43:34.307842  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:34.361519  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:43:34.361585  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:43:34.420419  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:43:34.420486  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:43:34.479633  342550 provision.go:87] duration metric: took 623.414755ms to configureAuth
	I1026 08:43:34.479699  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:43:34.479974  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:43:34.480118  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.505756  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:34.506063  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1026 08:43:34.506078  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:34.934452  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:34.934518  342550 machine.go:96] duration metric: took 5.067825426s to provisionDockerMachine
	I1026 08:43:34.934546  342550 start.go:293] postStartSetup for "ha-232402-m03" (driver="docker")
	I1026 08:43:34.934571  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:34.934666  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:34.934854  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:34.954917  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.082367  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:35.089885  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:43:35.090161  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:43:35.090176  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:43:35.090254  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:43:35.090369  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:43:35.090381  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:43:35.090546  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:35.101842  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:43:35.125627  342550 start.go:296] duration metric: took 191.050639ms for postStartSetup
	I1026 08:43:35.125778  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:43:35.125843  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.147102  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.264825  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:43:35.271672  342550 fix.go:56] duration metric: took 5.896121251s for fixHost
	I1026 08:43:35.271696  342550 start.go:83] releasing machines lock for "ha-232402-m03", held for 5.89617159s
	I1026 08:43:35.271770  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:43:35.297127  342550 out.go:179] * Found network options:
	I1026 08:43:35.302967  342550 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1026 08:43:35.306003  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306038  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306066  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:43:35.306091  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:43:35.306177  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:35.306229  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.306517  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:35.306579  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:43:35.328577  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.334791  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:43:35.497414  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:35.553666  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:35.553760  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:35.566215  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:43:35.566249  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:43:35.566284  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:43:35.566344  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:35.592142  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:35.609686  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:35.609758  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:35.630610  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:35.655250  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:35.914838  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:36.134783  342550 docker.go:234] disabling docker service ...
	I1026 08:43:36.134897  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:36.155549  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:36.173043  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:36.485618  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:36.970002  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:37.017784  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:37.075903  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:43:37.075984  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.109912  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:37.110012  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.149021  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.175380  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.186219  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:43:37.221818  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.248314  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:37.265224  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
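
This run of sed one-liners edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. Each edit is a whole-line replacement keyed on the option name; in Go the same pattern reduces to (a sketch, not minikube's crio.go):

	package crioconf

	import (
		"os"
		"regexp"
	)

	// SetConfigLine replaces every "<key> = ..." line in path with repl —
	// the Go analogue of `sudo sed -i 's|^.*<key> = .*$|<repl>|' <path>`.
	func SetConfigLine(path, key, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
	}

	// e.g. SetConfigLine("/etc/crio/crio.conf.d/02-crio.conf",
	//        "pause_image", `pause_image = "registry.k8s.io/pause:3.10.1"`)
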
	I1026 08:43:37.288935  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:43:37.303925  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:43:37.319373  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:43:37.587508  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:45:07.934759  342550 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.347172836s)
	I1026 08:45:07.934786  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:45:07.934837  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:45:07.939538  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:45:07.939605  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:45:07.943575  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:45:07.968256  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:45:07.968338  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:07.998587  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:08.044252  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:45:08.047310  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:45:08.050469  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1026 08:45:08.053493  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:45:08.069256  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:45:08.074145  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:08.085961  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:45:08.086231  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:08.086536  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:45:08.111717  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:45:08.112059  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.4
	I1026 08:45:08.112073  342550 certs.go:195] generating shared ca certs ...
	I1026 08:45:08.112098  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:45:08.112222  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:45:08.112268  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:45:08.112279  342550 certs.go:257] generating profile certs ...
	I1026 08:45:08.112378  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key
	I1026 08:45:08.112451  342550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key.aa893e80
	I1026 08:45:08.112494  342550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key
	I1026 08:45:08.112511  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:45:08.112532  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:45:08.112560  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:45:08.112589  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:45:08.112605  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 08:45:08.112627  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 08:45:08.112645  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 08:45:08.112660  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 08:45:08.112746  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:45:08.112782  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:45:08.112801  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:45:08.112842  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:45:08.112879  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:45:08.112910  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:45:08.112969  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:08.113008  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.113024  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.113046  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.113130  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:45:08.132367  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:45:08.231029  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 08:45:08.235028  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 08:45:08.244659  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 08:45:08.249599  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 08:45:08.261474  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 08:45:08.266790  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 08:45:08.276538  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 08:45:08.280256  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1026 08:45:08.289634  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 08:45:08.293405  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 08:45:08.301646  342550 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 08:45:08.305975  342550 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 08:45:08.315022  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:45:08.338065  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:45:08.356967  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:45:08.380657  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:45:08.402274  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:45:08.422301  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:45:08.441783  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:45:08.461742  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:45:08.481814  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:45:08.502025  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:45:08.521895  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:45:08.542103  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 08:45:08.555693  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 08:45:08.570653  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 08:45:08.588674  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1026 08:45:08.602475  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 08:45:08.616618  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 08:45:08.630309  342550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 08:45:08.645565  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:45:08.652358  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:45:08.661564  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.665847  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.665967  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:08.709135  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:45:08.717967  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:45:08.727059  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.731470  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.731567  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:45:08.774541  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:45:08.784749  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:45:08.793805  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.797757  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.797878  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:45:08.841551  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
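
Editor's note: each "openssl x509 -hash" run above feeds the "ln -fs" that follows it. OpenSSL-style trust stores index CAs by subject-name hash, so every PEM in /etc/ssl/certs needs a <hash>.0 symlink (the c_rehash convention; b5213941.0, 51391683.0 and 3ec20f2e.0 here). A minimal Go sketch of the same two steps, shelling out to openssl exactly as the log does; linkBySubjectHash is a hypothetical name.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the subject-name hash of a PEM
// certificate, then symlinks <hash>.0 to it so TLS clients scanning
// /etc/ssl/certs can find the CA. Assumes openssl is on PATH.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
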
	I1026 08:45:08.850068  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:45:08.854034  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:45:08.895708  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:45:08.942061  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:45:08.984630  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:45:09.028757  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:45:09.071885  342550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
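
Editor's note: the six "-checkend 86400" probes above ask whether each control-plane certificate remains valid for at least 86400 seconds (24 hours); a nonzero exit would mark the cert as expiring. A stdlib Go equivalent of that check, assuming a PEM certificate on disk; expiresWithin is a hypothetical helper.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, i.e. what `openssl x509 -checkend 86400` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h and would be treated as stale")
	}
}
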
	I1026 08:45:09.113415  342550 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1026 08:45:09.113537  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:45:09.113588  342550 kube-vip.go:115] generating kube-vip config ...
	I1026 08:45:09.113648  342550 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1026 08:45:09.127980  342550 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:45:09.128041  342550 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
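
Editor's note: the manifest above is a static pod. kube-vip advertises the HA VIP 192.168.49.254 over ARP on eth0 and uses Kubernetes leader election (5s lease, 3s renew deadline, 1s retry, the usual lease > renew > retry ordering) to decide which control-plane node holds the address; since the ip_vs modules were unavailable, only VIP failover, not IPVS load-balancing, is configured. Below is a minimal text/template sketch of rendering such a manifest from a few parameters; this is not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// A cut-down kube-vip static-pod manifest; only a few of the env vars
// from the log above are parameterized here.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: vip_leaderelection
      value: "true"
  hostNetwork: true
`

func main() {
	params := struct {
		Image, VIP, Interface string
		Port                  int
	}{"ghcr.io/kube-vip/kube-vip:v1.0.1", "192.168.49.254", "eth0", 8443}
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
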
	I1026 08:45:09.128109  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:45:09.136574  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:45:09.136660  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 08:45:09.145279  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:45:09.159587  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:45:09.174486  342550 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1026 08:45:09.192617  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:45:09.196600  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
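
Editor's note: the one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal line, appending the new mapping, and copying the temp file back with cp rather than mv; inside a Docker container /etc/hosts is a bind mount, so the file can only be rewritten in place, not renamed over. The same update in Go, under those assumptions; upsertHost is a hypothetical helper.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "ip\tname",
// then writes the file back in place (no rename), matching the shell
// one-liner's behavior on a bind-mounted /etc/hosts.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
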
	I1026 08:45:09.206757  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:09.371220  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:09.388111  342550 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:45:09.388597  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:09.391505  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:45:09.394393  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:09.549234  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:09.565513  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:45:09.565648  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:45:09.567107  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m03" to be "Ready" ...
	W1026 08:45:11.571085  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	W1026 08:45:13.571335  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	W1026 08:45:16.071949  342550 node_ready.go:57] node "ha-232402-m03" has "Ready":"Unknown" status (will retry)
	I1026 08:45:16.573590  342550 node_ready.go:49] node "ha-232402-m03" is "Ready"
	I1026 08:45:16.573675  342550 node_ready.go:38] duration metric: took 7.00653579s for node "ha-232402-m03" to be "Ready" ...
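
Editor's note: node_ready.go polls the node object until its Ready condition turns True, tolerating the Ready:Unknown readings at 08:45:11-08:45:16 while the restarted kubelet re-registers. A client-go sketch of that loop; waitNodeReady is a hypothetical helper and the 2s spacing mirrors the retry cadence visible above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition, tolerating Unknown or
// False readings, until it is True or the timeout elapses.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-232402-m03", 6*time.Minute); err != nil {
		panic(err)
	}
}
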
	I1026 08:45:16.573704  342550 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:45:16.573795  342550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:45:16.595522  342550 api_server.go:72] duration metric: took 7.20735956s to wait for apiserver process to appear ...
	I1026 08:45:16.595595  342550 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:45:16.595631  342550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 08:45:16.604035  342550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 08:45:16.604987  342550 api_server.go:141] control plane version: v1.34.1
	I1026 08:45:16.605006  342550 api_server.go:131] duration metric: took 9.390023ms to wait for apiserver health ...
	I1026 08:45:16.605015  342550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:45:16.613936  342550 system_pods.go:59] 26 kube-system pods found
	I1026 08:45:16.614018  342550 system_pods.go:61] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running
	I1026 08:45:16.614048  342550 system_pods.go:61] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running
	I1026 08:45:16.614068  342550 system_pods.go:61] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:45:16.614088  342550 system_pods.go:61] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:45:16.614126  342550 system_pods.go:61] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:45:16.614144  342550 system_pods.go:61] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:45:16.614163  342550 system_pods.go:61] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:45:16.614182  342550 system_pods.go:61] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:45:16.614216  342550 system_pods.go:61] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:45:16.614236  342550 system_pods.go:61] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running
	I1026 08:45:16.614257  342550 system_pods.go:61] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:45:16.614277  342550 system_pods.go:61] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:45:16.614312  342550 system_pods.go:61] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running
	I1026 08:45:16.614337  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:45:16.614368  342550 system_pods.go:61] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:45:16.614385  342550 system_pods.go:61] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:45:16.614414  342550 system_pods.go:61] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:45:16.614446  342550 system_pods.go:61] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:45:16.614463  342550 system_pods.go:61] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:45:16.614481  342550 system_pods.go:61] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running
	I1026 08:45:16.614508  342550 system_pods.go:61] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:45:16.614538  342550 system_pods.go:61] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:45:16.614557  342550 system_pods.go:61] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:45:16.614577  342550 system_pods.go:61] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:45:16.614614  342550 system_pods.go:61] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:45:16.614633  342550 system_pods.go:61] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:45:16.614654  342550 system_pods.go:74] duration metric: took 9.633315ms to wait for pod list to return data ...
	I1026 08:45:16.614688  342550 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:45:16.617833  342550 default_sa.go:45] found service account: "default"
	I1026 08:45:16.617904  342550 default_sa.go:55] duration metric: took 3.173782ms for default service account to be created ...
	I1026 08:45:16.617928  342550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:45:16.715675  342550 system_pods.go:86] 26 kube-system pods found
	I1026 08:45:16.715759  342550 system_pods.go:89] "coredns-66bc5c9577-d4htv" [e2cbf7be-1683-4697-a498-ecec7490c6cb] Running
	I1026 08:45:16.715782  342550 system_pods.go:89] "coredns-66bc5c9577-vctcf" [62957a9a-cde7-48bc-819a-f66c1d0c490b] Running
	I1026 08:45:16.715824  342550 system_pods.go:89] "etcd-ha-232402" [0496ec7d-4c76-4e8d-9e1c-74ae0b1f1015] Running
	I1026 08:45:16.715843  342550 system_pods.go:89] "etcd-ha-232402-m02" [acc19fb4-5e0b-461a-b91d-8a6d6c6db95a] Running
	I1026 08:45:16.715864  342550 system_pods.go:89] "etcd-ha-232402-m03" [8eece287-26b3-4e2c-9ac1-4d9cafd05dd1] Running
	I1026 08:45:16.715937  342550 system_pods.go:89] "kindnet-5vhnf" [6e990dca-3856-470c-873f-07531a8611ea] Running
	I1026 08:45:16.715954  342550 system_pods.go:89] "kindnet-7997s" [8e688cf6-28f9-48f5-9d08-7402ab7d5de0] Running
	I1026 08:45:16.715984  342550 system_pods.go:89] "kindnet-sj79h" [a6dd95fa-6eed-4b8e-bea2-deab4df77ccf] Running
	I1026 08:45:16.716013  342550 system_pods.go:89] "kindnet-w4trc" [9b92417c-97ee-4708-99a8-6631d29c30cd] Running
	I1026 08:45:16.716032  342550 system_pods.go:89] "kube-apiserver-ha-232402" [71356f8d-b35f-485a-b45f-85590a0c2c7a] Running
	I1026 08:45:16.716052  342550 system_pods.go:89] "kube-apiserver-ha-232402-m02" [81c4f4d5-9bbd-473a-bb0d-b2ce193bcd4e] Running
	I1026 08:45:16.716092  342550 system_pods.go:89] "kube-apiserver-ha-232402-m03" [6647436f-97c5-4767-8bb2-8301b73e9c46] Running
	I1026 08:45:16.716112  342550 system_pods.go:89] "kube-controller-manager-ha-232402" [546812fb-154a-4973-b304-f26883aede0f] Running
	I1026 08:45:16.716133  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m02" [51b737c6-dc76-4696-b0b2-f0ccc11208f9] Running
	I1026 08:45:16.716170  342550 system_pods.go:89] "kube-controller-manager-ha-232402-m03" [ea3731d1-0dbf-40d5-9440-d8155833a000] Running
	I1026 08:45:16.716191  342550 system_pods.go:89] "kube-proxy-5d92l" [d054a79c-6f87-4272-93a5-5df7e09ffc09] Running
	I1026 08:45:16.716210  342550 system_pods.go:89] "kube-proxy-ldrkt" [0a931610-2273-4af2-9930-c4b377ef5eb6] Running
	I1026 08:45:16.716229  342550 system_pods.go:89] "kube-proxy-lx2j2" [fe1eb1a0-a097-4b98-a8ed-b685b0afba94] Running
	I1026 08:45:16.716260  342550 system_pods.go:89] "kube-proxy-shqnc" [e2bdb796-fd4e-4758-914f-94e4c0586c5c] Running
	I1026 08:45:16.716280  342550 system_pods.go:89] "kube-scheduler-ha-232402" [ab2f9548-9f99-4e10-9932-fa0b0aa367d4] Running
	I1026 08:45:16.716302  342550 system_pods.go:89] "kube-scheduler-ha-232402-m02" [82ec57ec-c5c6-478c-8620-fa55cefa4f71] Running
	I1026 08:45:16.716341  342550 system_pods.go:89] "kube-scheduler-ha-232402-m03" [e04fa4b5-5bcc-4eff-9df4-cc3efdee0bbd] Running
	I1026 08:45:16.716362  342550 system_pods.go:89] "kube-vip-ha-232402" [c26e77cb-ac9a-4469-9a4b-6f1ad759e770] Running
	I1026 08:45:16.716380  342550 system_pods.go:89] "kube-vip-ha-232402-m02" [6cf9bdec-55d0-4256-be29-1ec5dfe274d1] Running
	I1026 08:45:16.716399  342550 system_pods.go:89] "kube-vip-ha-232402-m03" [fd0cde91-be62-43e1-8d93-8b7278231e57] Running
	I1026 08:45:16.716435  342550 system_pods.go:89] "storage-provisioner" [d84717c7-10ce-492a-9b6c-046e382f3a1e] Running
	I1026 08:45:16.716457  342550 system_pods.go:126] duration metric: took 98.51028ms to wait for k8s-apps to be running ...
	I1026 08:45:16.716492  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:45:16.716578  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:45:16.737535  342550 system_svc.go:56] duration metric: took 21.034459ms WaitForService to wait for kubelet
	I1026 08:45:16.737613  342550 kubeadm.go:586] duration metric: took 7.349454949s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:45:16.737646  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:45:16.742538  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742622  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742649  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742689  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742708  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742751  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742771  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:16.742799  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:16.742848  342550 node_conditions.go:105] duration metric: took 5.158408ms to run NodePressure ...
	I1026 08:45:16.742874  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:45:16.742923  342550 start.go:255] writing updated cluster config ...
	I1026 08:45:16.748453  342550 out.go:203] 
	I1026 08:45:16.751669  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:16.751857  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:16.755487  342550 out.go:179] * Starting "ha-232402-m04" worker node in "ha-232402" cluster
	I1026 08:45:16.760316  342550 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:45:16.763382  342550 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:45:16.766507  342550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:45:16.766623  342550 cache.go:58] Caching tarball of preloaded images
	I1026 08:45:16.766588  342550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:45:16.767053  342550 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 08:45:16.767077  342550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:45:16.767235  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:16.789140  342550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:45:16.789160  342550 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:45:16.789172  342550 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:45:16.789196  342550 start.go:360] acquireMachinesLock for ha-232402-m04: {Name:mk15269e9a15e15636295a3a12cc05426ca8566d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:45:16.789248  342550 start.go:364] duration metric: took 36.217µs to acquireMachinesLock for "ha-232402-m04"
	I1026 08:45:16.789267  342550 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:45:16.789272  342550 fix.go:54] fixHost starting: m04
	I1026 08:45:16.789524  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:45:16.816258  342550 fix.go:112] recreateIfNeeded on ha-232402-m04: state=Stopped err=<nil>
	W1026 08:45:16.816289  342550 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:45:16.819903  342550 out.go:252] * Restarting existing docker container for "ha-232402-m04" ...
	I1026 08:45:16.820003  342550 cli_runner.go:164] Run: docker start ha-232402-m04
	I1026 08:45:17.136467  342550 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:45:17.172522  342550 kic.go:430] container "ha-232402-m04" state is running.
	I1026 08:45:17.173106  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:17.210858  342550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/config.json ...
	I1026 08:45:17.211110  342550 machine.go:93] provisionDockerMachine start ...
	I1026 08:45:17.212380  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:17.248960  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:17.249254  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:17.249263  342550 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:45:17.250106  342550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43382->127.0.0.1:33195: read: connection reset by peer
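
Editor's note: the reset is expected; the container was started moments earlier and sshd is not yet accepting connections, so the dial is simply retried until the handshake at 08:45:20 succeeds. A sketch of such a retry loop with golang.org/x/crypto/ssh; dialWithRetry is a hypothetical helper, and the key path and port are taken from the surrounding log.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry redials until sshd inside the freshly started container
// accepts the handshake; early attempts typically fail with
// "connection reset by peer", exactly as logged above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/ha-232402-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33195", cfg, 30*time.Second)
	if err != nil {
		panic(err)
	}
	defer client.Close()
}
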
	I1026 08:45:20.411022  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m04
	
	I1026 08:45:20.411054  342550 ubuntu.go:182] provisioning hostname "ha-232402-m04"
	I1026 08:45:20.411151  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:20.437224  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:20.437615  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:20.437634  342550 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-232402-m04 && echo "ha-232402-m04" | sudo tee /etc/hostname
	I1026 08:45:20.606470  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-232402-m04
	
	I1026 08:45:20.606623  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:20.637294  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:20.637715  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:20.637737  342550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-232402-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-232402-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-232402-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:45:20.795267  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:45:20.795294  342550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 08:45:20.795316  342550 ubuntu.go:190] setting up certificates
	I1026 08:45:20.795325  342550 provision.go:84] configureAuth start
	I1026 08:45:20.795388  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:20.814347  342550 provision.go:143] copyHostCerts
	I1026 08:45:20.814401  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:45:20.814441  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 08:45:20.814454  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 08:45:20.814537  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 08:45:20.814631  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:45:20.814656  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 08:45:20.814661  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 08:45:20.814687  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 08:45:20.814798  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:45:20.814828  342550 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 08:45:20.814842  342550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 08:45:20.814869  342550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 08:45:20.814924  342550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.ha-232402-m04 san=[127.0.0.1 192.168.49.5 ha-232402-m04 localhost minikube]
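
Editor's note: the generated server.pem carries SANs for every name the machine might be addressed by: loopback, the node IP 192.168.49.5, the hostname, localhost and minikube. A stdlib Go sketch producing a certificate with that SAN set; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-232402-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above:
		DNSNames:    []string{"ha-232402-m04", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	// Self-signed for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
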
	I1026 08:45:21.016159  342550 provision.go:177] copyRemoteCerts
	I1026 08:45:21.016235  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:45:21.016281  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.041440  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.148014  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 08:45:21.148076  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:45:21.172598  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 08:45:21.172660  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:45:21.199069  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 08:45:21.199134  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:45:21.219310  342550 provision.go:87] duration metric: took 423.970968ms to configureAuth
	I1026 08:45:21.219338  342550 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:45:21.219574  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:21.219685  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.244539  342550 main.go:141] libmachine: Using SSH client type: native
	I1026 08:45:21.244932  342550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1026 08:45:21.244952  342550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:45:21.600980  342550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:45:21.601003  342550 machine.go:96] duration metric: took 4.389678213s to provisionDockerMachine
	I1026 08:45:21.601016  342550 start.go:293] postStartSetup for "ha-232402-m04" (driver="docker")
	I1026 08:45:21.601027  342550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:45:21.601089  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:45:21.601135  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.623066  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.735340  342550 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:45:21.738667  342550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:45:21.738698  342550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:45:21.738751  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 08:45:21.738812  342550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 08:45:21.738908  342550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 08:45:21.738919  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /etc/ssl/certs/2954752.pem
	I1026 08:45:21.739032  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:45:21.746960  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:21.766329  342550 start.go:296] duration metric: took 165.296455ms for postStartSetup
	I1026 08:45:21.766414  342550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:45:21.766453  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.787386  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.899980  342550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:45:21.907890  342550 fix.go:56] duration metric: took 5.118610324s for fixHost
	I1026 08:45:21.907917  342550 start.go:83] releasing machines lock for "ha-232402-m04", held for 5.118661688s
	I1026 08:45:21.907988  342550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:45:21.933326  342550 out.go:179] * Found network options:
	I1026 08:45:21.936320  342550 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1026 08:45:21.940256  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940294  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940306  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940340  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940357  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	W1026 08:45:21.940368  342550 proxy.go:120] fail to check proxy env: Error ip not in block
	I1026 08:45:21.940442  342550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:45:21.940486  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.940766  342550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:45:21.940826  342550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:45:21.972410  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:21.978079  342550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:45:22.149485  342550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:45:22.200194  342550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:45:22.200337  342550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:45:22.209074  342550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:45:22.209098  342550 start.go:495] detecting cgroup driver to use...
	I1026 08:45:22.209131  342550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 08:45:22.209180  342550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:45:22.227970  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:45:22.260018  342550 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:45:22.260091  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:45:22.280501  342550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:45:22.296013  342550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:45:22.435097  342550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:45:22.583385  342550 docker.go:234] disabling docker service ...
	I1026 08:45:22.583454  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:45:22.599821  342550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:45:22.618049  342550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:45:22.760465  342550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:45:22.913374  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:45:22.930530  342550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:45:22.946115  342550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:45:22.946198  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.955712  342550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:45:22.955791  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.967161  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.978701  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:22.988107  342550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:45:22.999250  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.011010  342550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.021614  342550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:45:23.033901  342550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:45:23.047274  342550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:45:23.055227  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:23.187258  342550 ssh_runner.go:195] Run: sudo systemctl restart crio
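
Editor's note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in before crio is restarted. This is a reconstruction from the commands, not a file captured from the node, and the section headers are assumed from the stock CRI-O config layout.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
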
	I1026 08:45:23.348936  342550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:45:23.349088  342550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:45:23.353170  342550 start.go:563] Will wait 60s for crictl version
	I1026 08:45:23.353242  342550 ssh_runner.go:195] Run: which crictl
	I1026 08:45:23.356804  342550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:45:23.401811  342550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:45:23.401919  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:23.436307  342550 ssh_runner.go:195] Run: crio --version
	I1026 08:45:23.473208  342550 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:45:23.476075  342550 out.go:179]   - env NO_PROXY=192.168.49.2
	I1026 08:45:23.478893  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1026 08:45:23.481820  342550 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1026 08:45:23.484818  342550 cli_runner.go:164] Run: docker network inspect ha-232402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:45:23.504854  342550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 08:45:23.509411  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:23.519797  342550 mustload.go:65] Loading cluster: ha-232402
	I1026 08:45:23.520052  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:23.520336  342550 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:45:23.539958  342550 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:45:23.540265  342550 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402 for IP: 192.168.49.5
	I1026 08:45:23.540275  342550 certs.go:195] generating shared ca certs ...
	I1026 08:45:23.540293  342550 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:45:23.540418  342550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 08:45:23.540465  342550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 08:45:23.540482  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 08:45:23.540497  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 08:45:23.540515  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 08:45:23.540528  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 08:45:23.540600  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 08:45:23.540638  342550 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 08:45:23.540660  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:45:23.540691  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:45:23.540724  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:45:23.540753  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 08:45:23.540804  342550 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 08:45:23.540835  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem -> /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.540850  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.540862  342550 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.540886  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:45:23.560629  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 08:45:23.585421  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:45:23.605705  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:45:23.632934  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 08:45:23.654288  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 08:45:23.674771  342550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:45:23.693831  342550 ssh_runner.go:195] Run: openssl version
	I1026 08:45:23.700411  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:45:23.709558  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.716080  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.716173  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:45:23.758415  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:45:23.767708  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 08:45:23.779057  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.784321  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.784454  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 08:45:23.831578  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 08:45:23.841350  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 08:45:23.850606  342550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.854695  342550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.854826  342550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 08:45:23.898173  342550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:45:23.906572  342550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:45:23.910323  342550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:45:23.910364  342550 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1026 08:45:23.910446  342550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-232402-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-232402 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:45:23.910505  342550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:45:23.920573  342550 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:45:23.920679  342550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1026 08:45:23.932673  342550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:45:23.947328  342550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:45:23.969163  342550 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1026 08:45:23.973466  342550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:45:23.984606  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:24.155134  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:24.171153  342550 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1026 08:45:24.171549  342550 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:45:24.174346  342550 out.go:179] * Verifying Kubernetes components...
	I1026 08:45:24.177303  342550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:45:24.343470  342550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:45:24.368034  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 08:45:24.368111  342550 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1026 08:45:24.368387  342550 node_ready.go:35] waiting up to 6m0s for node "ha-232402-m04" to be "Ready" ...
	I1026 08:45:25.872447  342550 node_ready.go:49] node "ha-232402-m04" is "Ready"
	I1026 08:45:25.872476  342550 node_ready.go:38] duration metric: took 1.504072228s for node "ha-232402-m04" to be "Ready" ...
	I1026 08:45:25.872489  342550 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:45:25.872631  342550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:45:25.886146  342550 system_svc.go:56] duration metric: took 13.648567ms WaitForService to wait for kubelet
	I1026 08:45:25.886178  342550 kubeadm.go:586] duration metric: took 1.714983841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:45:25.886197  342550 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:45:25.890052  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890084  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890096  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890101  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890106  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890116  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890120  342550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 08:45:25.890125  342550 node_conditions.go:123] node cpu capacity is 2
	I1026 08:45:25.890130  342550 node_conditions.go:105] duration metric: took 3.927915ms to run NodePressure ...
	I1026 08:45:25.890147  342550 start.go:241] waiting for startup goroutines ...
	I1026 08:45:25.890180  342550 start.go:255] writing updated cluster config ...
	I1026 08:45:25.890539  342550 ssh_runner.go:195] Run: rm -f paused
	I1026 08:45:25.897547  342550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:45:25.898046  342550 kapi.go:59] client config for ha-232402: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/ha-232402/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:45:25.914674  342550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d4htv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.921403  342550 pod_ready.go:94] pod "coredns-66bc5c9577-d4htv" is "Ready"
	I1026 08:45:25.921528  342550 pod_ready.go:86] duration metric: took 6.710293ms for pod "coredns-66bc5c9577-d4htv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.921572  342550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vctcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.928323  342550 pod_ready.go:94] pod "coredns-66bc5c9577-vctcf" is "Ready"
	I1026 08:45:25.928388  342550 pod_ready.go:86] duration metric: took 6.794421ms for pod "coredns-66bc5c9577-vctcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.931541  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.938566  342550 pod_ready.go:94] pod "etcd-ha-232402" is "Ready"
	I1026 08:45:25.938593  342550 pod_ready.go:86] duration metric: took 7.022993ms for pod "etcd-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.938603  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.944339  342550 pod_ready.go:94] pod "etcd-ha-232402-m02" is "Ready"
	I1026 08:45:25.944373  342550 pod_ready.go:86] duration metric: took 5.762714ms for pod "etcd-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:25.944383  342550 pod_ready.go:83] waiting for pod "etcd-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:26.098602  342550 request.go:683] "Waited before sending request" delay="154.1318ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-232402-m03"
	I1026 08:45:26.299278  342550 request.go:683] "Waited before sending request" delay="197.131159ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:26.498654  342550 request.go:683] "Waited before sending request" delay="53.17348ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-232402-m03"
	I1026 08:45:26.699396  342550 request.go:683] "Waited before sending request" delay="197.322103ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:27.099200  342550 request.go:683] "Waited before sending request" delay="150.305147ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	W1026 08:45:27.952681  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:30.450341  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:32.451378  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:34.951997  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:36.952338  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:38.952753  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:41.452152  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:43.951084  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:45.956575  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:48.451391  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:50.451685  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:52.950573  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:45:54.951442  342550 pod_ready.go:104] pod "etcd-ha-232402-m03" is not "Ready", error: <nil>
	I1026 08:45:56.952674  342550 pod_ready.go:94] pod "etcd-ha-232402-m03" is "Ready"
	I1026 08:45:56.952698  342550 pod_ready.go:86] duration metric: took 31.008309673s for pod "etcd-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.957384  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.966004  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402" is "Ready"
	I1026 08:45:56.966072  342550 pod_ready.go:86] duration metric: took 8.662888ms for pod "kube-apiserver-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.966104  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.973739  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402-m02" is "Ready"
	I1026 08:45:56.973764  342550 pod_ready.go:86] duration metric: took 7.640413ms for pod "kube-apiserver-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.973773  342550 pod_ready.go:83] waiting for pod "kube-apiserver-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.981079  342550 pod_ready.go:94] pod "kube-apiserver-ha-232402-m03" is "Ready"
	I1026 08:45:56.981103  342550 pod_ready.go:86] duration metric: took 7.323871ms for pod "kube-apiserver-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:56.985549  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.145955  342550 request.go:683] "Waited before sending request" delay="160.263354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402"
	I1026 08:45:57.345448  342550 request.go:683] "Waited before sending request" delay="176.112448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402"
	I1026 08:45:57.350017  342550 pod_ready.go:94] pod "kube-controller-manager-ha-232402" is "Ready"
	I1026 08:45:57.350048  342550 pod_ready.go:86] duration metric: took 364.42267ms for pod "kube-controller-manager-ha-232402" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.350058  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.545478  342550 request.go:683] "Waited before sending request" delay="195.318809ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m02"
	I1026 08:45:57.746036  342550 request.go:683] "Waited before sending request" delay="196.306126ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m02"
	I1026 08:45:57.749268  342550 pod_ready.go:94] pod "kube-controller-manager-ha-232402-m02" is "Ready"
	I1026 08:45:57.749295  342550 pod_ready.go:86] duration metric: took 399.228382ms for pod "kube-controller-manager-ha-232402-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.749305  342550 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:45:57.945742  342550 request.go:683] "Waited before sending request" delay="196.324022ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m03"
	I1026 08:45:58.145179  342550 request.go:683] "Waited before sending request" delay="195.240885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:58.346153  342550 request.go:683] "Waited before sending request" delay="96.402716ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-232402-m03"
	I1026 08:45:58.545837  342550 request.go:683] "Waited before sending request" delay="196.140702ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:58.946129  342550 request.go:683] "Waited before sending request" delay="192.251793ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	I1026 08:45:59.345416  342550 request.go:683] "Waited before sending request" delay="92.227487ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-232402-m03"
	W1026 08:45:59.755924  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:01.756440  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:03.756734  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:06.263222  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:08.756233  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:10.759737  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:13.262615  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:15.263768  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:17.761879  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:20.266086  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:22.755536  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:24.756371  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:27.265289  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:29.756416  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:32.261261  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:34.278965  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:36.756714  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:39.255754  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:41.260562  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:43.263679  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:45.756223  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:47.762407  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:50.257781  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:52.261309  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:54.266882  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:56.756901  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:46:59.265385  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:01.266136  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:03.755443  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:05.755740  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:07.757293  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:10.261769  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:12.263710  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:14.265412  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:16.757171  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:19.259551  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:21.267428  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:23.756993  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:26.257777  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:28.263354  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:30.757700  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:33.260488  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:35.261687  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:37.266110  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:39.756367  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:41.759474  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:44.258485  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:46.259773  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:48.269451  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:50.756558  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:53.259529  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:55.261684  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:57.264250  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:47:59.268741  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:01.756567  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:04.263036  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:06.758354  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:09.263247  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:11.263720  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:13.759643  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:16.263136  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:18.762943  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:21.264362  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:23.756305  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:26.262469  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:28.265988  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:30.756780  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:33.263227  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:35.756771  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:37.759839  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:40.258685  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:42.265762  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:44.756117  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:46.757380  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:49.258967  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:51.259693  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:53.265397  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:55.755930  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:48:57.758343  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:00.294615  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:02.756143  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:05.263920  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:07.756279  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:09.757243  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:12.261285  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:14.756800  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:16.756845  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:19.265272  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:21.756019  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:23.756649  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	W1026 08:49:25.756946  342550 pod_ready.go:104] pod "kube-controller-manager-ha-232402-m03" is not "Ready", error: <nil>
	I1026 08:49:25.898241  342550 pod_ready.go:86] duration metric: took 3m28.14891381s for pod "kube-controller-manager-ha-232402-m03" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:49:25.898285  342550 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1026 08:49:25.898319  342550 pod_ready.go:40] duration metric: took 4m0.000740057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:49:25.901244  342550 out.go:203] 
	W1026 08:49:25.904226  342550 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1026 08:49:25.907092  342550 out.go:203] 
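	
	Editor's note: the wait loop above (pod_ready.go) polls each kube-system pod until it reports Ready or the 4m0s budget lapses; here it times out on kube-controller-manager-ha-232402-m03 and the run exits with GUEST_START. A minimal client-go sketch of the same style of readiness poll follows; it is illustrative only, not minikube's actual pod_ready.go, and the kubeconfig path, namespace, pod name, and 4-minute budget are assumptions taken from this log.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Assumed kubeconfig location (~/.kube/config); the log above builds
		// its client from the profile's client certs instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s and give up after 4m, the same budget pod_ready.go reports above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
					"kube-controller-manager-ha-232402-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				return podReady(pod), nil
			})
		fmt.Println("wait result:", err) // nil if Ready; context deadline exceeded on timeout
	}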
	
	
	==> CRI-O <==
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.055530748Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a36f7ca6-cdd7-47c7-b863-069411fe28c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.056722414Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=25840e2c-159c-4bdd-b6c4-5f359a2f8cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.05732914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.065669981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.065849413Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3f6f3e26e690a8f6ac18c116a6c69eb990333d2104beb5428efb4b408a2d6f63/merged/etc/passwd: no such file or directory"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.065871247Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3f6f3e26e690a8f6ac18c116a6c69eb990333d2104beb5428efb4b408a2d6f63/merged/etc/group: no such file or directory"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.067396454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.096904327Z" level=info msg="Created container ccafc56fd4a2108827ca65d4cac792ef35a3726616238488cd483658cbfcee06: kube-system/storage-provisioner/storage-provisioner" id=25840e2c-159c-4bdd-b6c4-5f359a2f8cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.098189532Z" level=info msg="Starting container: ccafc56fd4a2108827ca65d4cac792ef35a3726616238488cd483658cbfcee06" id=a500e041-7242-4266-b2f5-5e046e4b6e73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:43:58 ha-232402 crio[662]: time="2025-10-26T08:43:58.105395036Z" level=info msg="Started container" PID=1385 containerID=ccafc56fd4a2108827ca65d4cac792ef35a3726616238488cd483658cbfcee06 description=kube-system/storage-provisioner/storage-provisioner id=a500e041-7242-4266-b2f5-5e046e4b6e73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.209773446Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.21366944Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.213706044Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.213728772Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.21796936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.218006759Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.2180324Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.225383933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.225422383Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.225447023Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.229546424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.229580385Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.22960313Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.23319124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:44:08 ha-232402 crio[662]: time="2025-10-26T08:44:08.233226121Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	ccafc56fd4a21       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c525cdee14da2       storage-provisioner                 kube-system
	d1b260f911620       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   1                   7177eb3e88656       coredns-66bc5c9577-d4htv            kube-system
	ccbff713b36fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       1                   c525cdee14da2       storage-provisioner                 kube-system
	3cd43960fb6f6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               1                   c4541e801df01       kindnet-sj79h                       kube-system
	3ff518798314f       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   1                   0004856ef0019       busybox-7b57f96db7-cm8cd            default
	7118e270a54de       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 minutes ago       Running             kube-proxy                1                   625e3c4593d35       kube-proxy-shqnc                    kube-system
	c50ed772037e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   1                   34fc152febd26       coredns-66bc5c9577-vctcf            kube-system
	82262d66f85eb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   2                   3f8a820509e20       kube-controller-manager-ha-232402   kube-system
	b61c82cad7fbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            1                   1f30462480195       kube-apiserver-ha-232402            kube-system
	1c8917dd6e25d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      1                   bc75faa2b77d5       etcd-ha-232402                      kube-system
	7a416fdc86cf6       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  0                   1c8d8f22b837d       kube-vip-ha-232402                  kube-system
	f20afdb6dc956       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            1                   3ce02f718ba79       kube-scheduler-ha-232402            kube-system
	1902c617979de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   1                   3f8a820509e20       kube-controller-manager-ha-232402   kube-system
	
	
	==> coredns [c50ed772037e681714fda2702cfabc3905954c28cc4a6de24ae74fbcfa3040ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51235 - 42254 "HINFO IN 1197954165026605269.515736649033002582. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.050793767s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d1b260f911620694e1ff384bc3dd99d793f69504fd0119df09fddd2eade05efb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40395 - 6176 "HINFO IN 2061310158999439352.5501595593806426841. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024825273s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
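	
	Editor's note: both coredns replicas above time out against the kubernetes service VIP (dial tcp 10.96.0.1:443: i/o timeout), presumably while kube-proxy and the CNI were still converging after the restart. The sketch below issues the same style of list request from inside a pod to reproduce the symptom; it is illustrative only and assumes an in-cluster service account, which is not part of this report.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// InClusterConfig targets the same kubernetes.default VIP
		// (10.96.0.1:443) that the coredns reflectors timed out against.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err) // only works inside a pod with a mounted service account
		}
		cfg.Timeout = 10 * time.Second
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same shape of request as the failing reflector list (limit=500).
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed:", err) // e.g. "dial tcp 10.96.0.1:443: i/o timeout"
			return
		}
		fmt.Println("listed", len(svcs.Items), "services")
	}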
	
	
	==> describe nodes <==
	Name:               ha-232402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_35_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:49:35 +0000   Sun, 26 Oct 2025 08:35:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:49:35 +0000   Sun, 26 Oct 2025 08:35:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:49:35 +0000   Sun, 26 Oct 2025 08:35:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:49:35 +0000   Sun, 26 Oct 2025 08:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-232402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                bbe6db68-9456-4b78-bafa-19416f913215
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cm8cd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-d4htv             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-vctcf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-232402                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-sj79h                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-232402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-232402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-shqnc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-232402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-232402                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m14s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-232402 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-232402 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-232402 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-232402 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-232402 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-232402 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                    node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-232402 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Warning  CgroupV1                 6m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m47s (x8 over 6m47s)  kubelet          Node ha-232402 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m47s (x8 over 6m47s)  kubelet          Node ha-232402 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m47s (x8 over 6m47s)  kubelet          Node ha-232402 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m12s                  node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	  Normal   RegisteredNode           6m7s                   node-controller  Node ha-232402 event: Registered Node ha-232402 in Controller
	
	
	Name:               ha-232402-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_26T08_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:36:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:44:56 +0000   Sun, 26 Oct 2025 08:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-232402-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                9d9c7de9-a47e-4495-8bfe-cf6ec5e7ea66
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lb2w6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-232402-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-w4trc                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-232402-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-232402-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ldrkt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-232402-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-232402-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m42s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Warning  CgroupV1                 9m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m8s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m8s (x8 over 9m8s)    kubelet          Node ha-232402-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m8s (x8 over 9m8s)    kubelet          Node ha-232402-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m8s (x8 over 9m8s)    kubelet          Node ha-232402-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8m34s                  node-controller  Node ha-232402-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        8m8s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Warning  CgroupV1                 6m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m42s (x8 over 6m43s)  kubelet          Node ha-232402-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m42s (x8 over 6m43s)  kubelet          Node ha-232402-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m42s (x8 over 6m43s)  kubelet          Node ha-232402-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m12s                  node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	  Normal   RegisteredNode           6m7s                   node-controller  Node ha-232402-m02 event: Registered Node ha-232402-m02 in Controller
	
	
	Name:               ha-232402-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-232402-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=ha-232402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_26T08_39_13_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-232402-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:49:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:49:09 +0000   Sun, 26 Oct 2025 08:45:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-232402-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e9a241a8-d572-4875-939f-43a808f4d239
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-898c9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kindnet-7997s               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-lx2j2            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 4m15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)      kubelet          Node ha-232402-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)      kubelet          Node ha-232402-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)      kubelet          Node ha-232402-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   NodeReady                9m48s                  kubelet          Node ha-232402-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           6m12s                  node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   RegisteredNode           6m7s                   node-controller  Node ha-232402-m04 event: Registered Node ha-232402-m04 in Controller
	  Normal   NodeNotReady             5m22s                  node-controller  Node ha-232402-m04 status is now: NodeNotReady
	  Normal   Starting                 4m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m21s (x8 over 4m24s)  kubelet          Node ha-232402-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m21s (x8 over 4m24s)  kubelet          Node ha-232402-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m21s (x8 over 4m24s)  kubelet          Node ha-232402-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Oct26 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014214] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501900] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033459] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.999923] kauditd_printk_skb: 36 callbacks suppressed
	[Oct26 08:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 08:14] overlayfs: idmapped layers are currently not supported
	[  +0.063904] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct26 08:20] overlayfs: idmapped layers are currently not supported
	[ +54.744422] overlayfs: idmapped layers are currently not supported
	[Oct26 08:35] overlayfs: idmapped layers are currently not supported
	[ +38.059390] overlayfs: idmapped layers are currently not supported
	[Oct26 08:37] overlayfs: idmapped layers are currently not supported
	[Oct26 08:39] overlayfs: idmapped layers are currently not supported
	[Oct26 08:40] overlayfs: idmapped layers are currently not supported
	[Oct26 08:42] overlayfs: idmapped layers are currently not supported
	[Oct26 08:43] overlayfs: idmapped layers are currently not supported
	[ +30.554221] overlayfs: idmapped layers are currently not supported
	[Oct26 08:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1c8917dd6e25dfe8420b3a3b324ba48edc068e4197ed8c758044d6818d9f3ba7] <==
	{"level":"info","ts":"2025-10-26T08:45:12.249621Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:45:12.857175Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"caa0018a645388bb","rtt":"45.483523ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:45:12.857094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"caa0018a645388bb","rtt":"494.261µs","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-26T08:49:33.512138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:48502","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.4:48502: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-26T08:49:33.559218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:49:33.602094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:48526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:49:33.644335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(3049339064415197536 12593026477526642892)"}
	{"level":"info","ts":"2025-10-26T08:49:33.654604Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"caa0018a645388bb","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-26T08:49:33.654665Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.654978Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:49:33.655000Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.655309Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:49:33.655340Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.656504Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.656566Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2025-10-26T08:49:33.656769Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.691065Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb","error":"context canceled"}
	{"level":"warn","ts":"2025-10-26T08:49:33.691126Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"caa0018a645388bb","error":"failed to read caa0018a645388bb on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-10-26T08:49:33.691147Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.691235Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb","error":"context canceled"}
	{"level":"info","ts":"2025-10-26T08:49:33.691249Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:49:33.691257Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"caa0018a645388bb"}
	{"level":"info","ts":"2025-10-26T08:49:33.691272Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"caa0018a645388bb"}
	{"level":"warn","ts":"2025-10-26T08:49:33.696851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:39240","server-name":"","error":"read tcp 192.168.49.2:2380->192.168.49.4:39240: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-26T08:49:33.696933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:39254","server-name":"","error":"read tcp 192.168.49.2:2380->192.168.49.4:39254: read: connection reset by peer"}
	
	
	==> kernel <==
	 08:49:43 up  2:32,  0 user,  load average: 0.80, 1.42, 1.65
	Linux ha-232402 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3cd43960fb6f6cf7d48ad29f24625b8433a94419c29a4ee040806279746cd882] <==
	I1026 08:49:08.215161       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	I1026 08:49:08.215214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:49:08.215225       1 main.go:301] handling current node
	I1026 08:49:18.208717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:49:18.208750       1 main.go:301] handling current node
	I1026 08:49:18.208771       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1026 08:49:18.208777       1 main.go:324] Node ha-232402-m02 has CIDR [10.244.1.0/24] 
	I1026 08:49:18.209136       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1026 08:49:18.209157       1 main.go:324] Node ha-232402-m03 has CIDR [10.244.2.0/24] 
	I1026 08:49:18.209437       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1026 08:49:18.209455       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	I1026 08:49:28.207734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:49:28.207769       1 main.go:301] handling current node
	I1026 08:49:28.207787       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1026 08:49:28.207794       1 main.go:324] Node ha-232402-m02 has CIDR [10.244.1.0/24] 
	I1026 08:49:28.207917       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1026 08:49:28.207924       1 main.go:324] Node ha-232402-m03 has CIDR [10.244.2.0/24] 
	I1026 08:49:28.207970       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1026 08:49:28.207975       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	I1026 08:49:38.208182       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1026 08:49:38.208299       1 main.go:324] Node ha-232402-m04 has CIDR [10.244.3.0/24] 
	I1026 08:49:38.208491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:49:38.208510       1 main.go:301] handling current node
	I1026 08:49:38.208526       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1026 08:49:38.208531       1 main.go:324] Node ha-232402-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b61c82cad7fbfa81b5335ff117e6fd6ed77be750be18b2795baad05c04597be3] <==
	W1026 08:43:27.305348       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I1026 08:43:27.306929       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:43:27.311228       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:43:27.324731       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:43:27.327726       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:43:27.327735       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:43:27.327906       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:43:27.327828       1 policy_source.go:240] refreshing policies
	I1026 08:43:27.328546       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:43:27.331963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:43:27.332208       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 08:43:27.332300       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:43:27.333361       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:43:27.337869       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:43:27.376169       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:43:27.381839       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:43:27.409979       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:43:27.447274       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1026 08:43:27.475230       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1026 08:43:27.985636       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:43:27.985716       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1026 08:43:29.320031       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1026 08:43:31.336571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:43:31.674963       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:43:31.749809       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [1902c617979ded8ef7430e8c9f9735ce1b420b6259bcc8d54001ef6f37f1fd3f] <==
	I1026 08:42:59.771115       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:43:00.463701       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1026 08:43:00.466861       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:43:00.471372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1026 08:43:00.472329       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 08:43:00.473009       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:43:00.473074       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1026 08:43:17.678266       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [82262d66f85ebbee5a088769db1d28fa6161254725e1ea9a0274c8fce8f56956] <==
	I1026 08:43:31.329058       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 08:43:31.329086       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 08:43:31.329098       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 08:43:31.329105       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 08:43:31.333032       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:43:31.342855       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:43:31.342958       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:43:31.343031       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:43:31.343112       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402-m04"
	I1026 08:43:31.343147       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402"
	I1026 08:43:31.343175       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402-m02"
	I1026 08:43:31.343200       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-232402-m03"
	I1026 08:43:31.348289       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:43:31.348719       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 08:43:31.363126       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:43:31.363157       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:43:31.363164       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:43:31.363300       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:43:31.383928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:43:37.810145       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-232402-m04"
	I1026 08:44:09.025477       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-695lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-695lc\": the object has been modified; please apply your changes to the latest version and try again"
	I1026 08:44:09.026276       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3c29a361-1f25-4599-8da3-746461b4ad63", APIVersion:"v1", ResourceVersion:"299", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-695lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-695lc": the object has been modified; please apply your changes to the latest version and try again
	I1026 08:45:25.475444       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-232402-m04"
	I1026 08:49:35.986093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-232402-m04"
	E1026 08:49:36.136900       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-232402-m03\", UID:\"8cf30fde-1ec2-4628-878d-3d7c2822055a\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-232402-m03\", UID:\"0f357e26-5971-44e9-a518-b7728c6dd33a\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-232402-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [7118e270a54de9fcc61cc18590366b83ba6704ad59a67ab20e69bf4f67d17e7c] <==
	I1026 08:43:28.139143       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:43:28.237828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:43:28.343637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:43:28.343700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 08:43:28.343784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:43:28.391953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:43:28.392073       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:43:28.404165       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:43:28.404888       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:43:28.405610       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:43:28.410809       1 config.go:309] "Starting node config controller"
	I1026 08:43:28.410884       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:43:28.410916       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:43:28.414668       1 config.go:200] "Starting service config controller"
	I1026 08:43:28.414693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:43:28.414822       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:43:28.414828       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:43:28.414840       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:43:28.414844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:43:28.515537       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:43:28.515640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:43:28.515669       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f20afdb6dc9568c5fef5900fd16550aaeceaace97af19ff784772913a96da43b] <==
	E1026 08:43:17.411010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:43:17.591287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:43:18.038931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:43:18.304409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:43:21.658558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:43:22.021409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:43:22.669346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:43:22.820402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:43:23.560606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:43:23.584722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:43:23.647879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:43:24.333740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:43:24.557029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:43:24.594195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:43:24.764686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:43:24.816471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:43:25.009097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:43:25.460229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:43:26.017357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 08:43:26.835015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1026 08:43:44.333513       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1026 08:49:30.193869       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-898c9\": pod busybox-7b57f96db7-898c9 is already assigned to node \"ha-232402-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-898c9" node="ha-232402-m04"
	E1026 08:49:30.194120       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 447c721a-c621-4022-b1d5-d058fe54d327(default/busybox-7b57f96db7-898c9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-898c9"
	E1026 08:49:30.194285       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-898c9\": pod busybox-7b57f96db7-898c9 is already assigned to node \"ha-232402-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-898c9"
	I1026 08:49:30.204830       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-898c9" node="ha-232402-m04"
	
	
	==> kubelet <==
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373344     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d84717c7-10ce-492a-9b6c-046e382f3a1e-tmp\") pod \"storage-provisioner\" (UID: \"d84717c7-10ce-492a-9b6c-046e382f3a1e\") " pod="kube-system/storage-provisioner"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373438     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6dd95fa-6eed-4b8e-bea2-deab4df77ccf-cni-cfg\") pod \"kindnet-sj79h\" (UID: \"a6dd95fa-6eed-4b8e-bea2-deab4df77ccf\") " pod="kube-system/kindnet-sj79h"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373473     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2bdb796-fd4e-4758-914f-94e4c0586c5c-xtables-lock\") pod \"kube-proxy-shqnc\" (UID: \"e2bdb796-fd4e-4758-914f-94e4c0586c5c\") " pod="kube-system/kube-proxy-shqnc"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.373519     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6dd95fa-6eed-4b8e-bea2-deab4df77ccf-xtables-lock\") pod \"kindnet-sj79h\" (UID: \"a6dd95fa-6eed-4b8e-bea2-deab4df77ccf\") " pod="kube-system/kindnet-sj79h"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.407586     795 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.407625     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.415031     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-232402\" already exists" pod="kube-system/kube-vip-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.415073     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.418210     795 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.489284     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-232402\" already exists" pod="kube-system/etcd-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.489673     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.515316     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-232402\" already exists" pod="kube-system/kube-apiserver-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.515524     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.527791     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-232402" podStartSLOduration=0.527762342 podStartE2EDuration="527.762342ms" podCreationTimestamp="2025-10-26 08:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:43:27.486864284 +0000 UTC m=+30.924180047" watchObservedRunningTime="2025-10-26 08:43:27.527762342 +0000 UTC m=+30.965078105"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.534789     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-232402\" already exists" pod="kube-system/kube-controller-manager-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: I1026 08:43:27.534983     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: E1026 08:43:27.555154     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-232402\" already exists" pod="kube-system/kube-scheduler-ha-232402"
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.636645     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-34fc152febd26be6f9b2aed88197c0dca8ec426c0bd76d03686a7417bb745c5f WatchSource:0}: Error finding container 34fc152febd26be6f9b2aed88197c0dca8ec426c0bd76d03686a7417bb745c5f: Status 404 returned error can't find the container with id 34fc152febd26be6f9b2aed88197c0dca8ec426c0bd76d03686a7417bb745c5f
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.671962     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-625e3c4593d35047e5acfb6e38ce32ad3ade32537eb3e25bfad3edce77a485bd WatchSource:0}: Error finding container 625e3c4593d35047e5acfb6e38ce32ad3ade32537eb3e25bfad3edce77a485bd: Status 404 returned error can't find the container with id 625e3c4593d35047e5acfb6e38ce32ad3ade32537eb3e25bfad3edce77a485bd
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.683134     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5 WatchSource:0}: Error finding container c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5: Status 404 returned error can't find the container with id c525cdee14da2715525a929e49d08835077697db7fb325b71be72d7b5e68c6e5
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.709963     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-c4541e801df01ee1c08057b276e232f08c5ed1408522a457faec80c9a3a56d0c WatchSource:0}: Error finding container c4541e801df01ee1c08057b276e232f08c5ed1408522a457faec80c9a3a56d0c: Status 404 returned error can't find the container with id c4541e801df01ee1c08057b276e232f08c5ed1408522a457faec80c9a3a56d0c
	Oct 26 08:43:27 ha-232402 kubelet[795]: W1026 08:43:27.723357     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio-0004856ef0019873220ddbce07a325ca01de3447de1441db6840aeb8b304037b WatchSource:0}: Error finding container 0004856ef0019873220ddbce07a325ca01de3447de1441db6840aeb8b304037b: Status 404 returned error can't find the container with id 0004856ef0019873220ddbce07a325ca01de3447de1441db6840aeb8b304037b
	Oct 26 08:43:56 ha-232402 kubelet[795]: E1026 08:43:56.682053     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077\": container with ID starting with 286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077 not found: ID does not exist" containerID="286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077"
	Oct 26 08:43:56 ha-232402 kubelet[795]: I1026 08:43:56.682112     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077" err="rpc error: code = NotFound desc = could not find container \"286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077\": container with ID starting with 286b404cb76869de890a7a5675c965de08f041b611806f2c150681be4566c077 not found: ID does not exist"
	Oct 26 08:43:58 ha-232402 kubelet[795]: I1026 08:43:58.049670     795 scope.go:117] "RemoveContainer" containerID="ccbff713b36fcfaa4bcb0299272ff0aef6dd8a01d9a0ff88e1f7959d292d74d0"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-232402 -n ha-232402
helpers_test.go:269: (dbg) Run:  kubectl --context ha-232402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.41s)
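
The post-mortem above records the deleted secondary's etcd member (caa0018a645388bb, peer https://192.168.49.4:2380) being dropped from the ha-232402 cluster, plus a benign scheduler double-bind on busybox-7b57f96db7-898c9. As a triage sketch (not part of the recorded run; the etcd certificate paths are an assumption based on minikube's default kubeadm cert directory, /var/lib/minikube/certs), both can be cross-checked against the live cluster:

	# list the surviving voters; caa0018a645388bb should be gone (cert paths assumed)
	kubectl --context ha-232402 -n kube-system exec etcd-ha-232402 -- etcdctl \
	  --endpoints=https://192.168.49.2:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  member list -w table
	# the scheduler log says the pod was already bound; confirm which node it landed on
	kubectl --context ha-232402 get pod busybox-7b57f96db7-898c9 -o jsonpath='{.spec.nodeName}'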

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-284707 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-284707 --output=json --user=testUser: exit status 80 (1.761843647s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1194c226-4abe-442d-9bf5-b892a47936ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-284707 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0d8b1d8b-0831-4260-b0f2-453e71aa5fb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T08:54:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"ddb76fb7-f303-49d8-93c9-bae5d6de65d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-284707 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.76s)
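
This failure and the unpause failure below share one root cause: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", i.e. the runc state directory is missing on the node. A minimal way to reproduce the failing probe by hand (a diagnostic sketch, not part of the recorded run):

	minikube ssh -p json-output-284707 "sudo ls -ld /run/runc; sudo runc list -f json"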

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.18s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-284707 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-284707 --output=json --user=testUser: exit status 80 (2.175929181s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"164dc63c-e71d-424f-bc8c-739744c951ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-284707 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"74f30fbc-d91a-4025-b135-4f85eb9228c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T08:54:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"ab765834-ca18-4910-b8f8-585239da575f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-284707 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.18s)

                                                
                                    
x
+
TestKubernetesUpgrade (551.52s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.446245957s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-275732
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-275732: (1.557459793s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-275732 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-275732 status --format={{.Host}}: exit status 7 (77.635004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.153630002s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-275732 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (144.383567ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-275732] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-275732
	    minikube start -p kubernetes-upgrade-275732 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2757322 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-275732 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 105 (7m40.614543817s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-275732] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-275732" primary control-plane node in "kubernetes-upgrade-275732" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 09:13:50.414378  445201 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:13:50.414618  445201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:13:50.414648  445201 out.go:374] Setting ErrFile to fd 2...
	I1026 09:13:50.414670  445201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:13:50.415061  445201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:13:50.415510  445201 out.go:368] Setting JSON to false
	I1026 09:13:50.416524  445201 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10581,"bootTime":1761459450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:13:50.416629  445201 start.go:141] virtualization:  
	I1026 09:13:50.420181  445201 out.go:179] * [kubernetes-upgrade-275732] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:13:50.427013  445201 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:13:50.427098  445201 notify.go:220] Checking for updates...
	I1026 09:13:50.431285  445201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:13:50.434382  445201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:13:50.437325  445201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:13:50.440203  445201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:13:50.443197  445201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:13:50.446643  445201 config.go:182] Loaded profile config "kubernetes-upgrade-275732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:13:50.447326  445201 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:13:50.481599  445201 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:13:50.481732  445201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:13:50.590184  445201 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:13:50.579656484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:13:50.590292  445201 docker.go:318] overlay module found
	I1026 09:13:50.596086  445201 out.go:179] * Using the docker driver based on existing profile
	I1026 09:13:50.599111  445201 start.go:305] selected driver: docker
	I1026 09:13:50.599138  445201 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-275732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-275732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:13:50.599262  445201 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:13:50.599995  445201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:13:50.694363  445201 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:13:50.685309009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:13:50.694691  445201 cni.go:84] Creating CNI manager for ""
	I1026 09:13:50.694815  445201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:13:50.694865  445201 start.go:349] cluster config:
	{Name:kubernetes-upgrade-275732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-275732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:13:50.699490  445201 out.go:179] * Starting "kubernetes-upgrade-275732" primary control-plane node in "kubernetes-upgrade-275732" cluster
	I1026 09:13:50.702604  445201 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:13:50.705681  445201 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:13:50.708631  445201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:13:50.708711  445201 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:13:50.708730  445201 cache.go:58] Caching tarball of preloaded images
	I1026 09:13:50.708816  445201 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:13:50.708831  445201 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:13:50.708939  445201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/config.json ...
	I1026 09:13:50.709157  445201 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:13:50.740868  445201 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:13:50.740893  445201 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:13:50.740908  445201 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:13:50.740937  445201 start.go:360] acquireMachinesLock for kubernetes-upgrade-275732: {Name:mke9fce24fd0439ea74f45d10af7a3be96148597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:13:50.740995  445201 start.go:364] duration metric: took 36.637µs to acquireMachinesLock for "kubernetes-upgrade-275732"
	I1026 09:13:50.741020  445201 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:13:50.741031  445201 fix.go:54] fixHost starting: 
	I1026 09:13:50.741307  445201 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-275732 --format={{.State.Status}}
	I1026 09:13:50.770985  445201 fix.go:112] recreateIfNeeded on kubernetes-upgrade-275732: state=Running err=<nil>
	W1026 09:13:50.771035  445201 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:13:50.774027  445201 out.go:252] * Updating the running docker "kubernetes-upgrade-275732" container ...
	I1026 09:13:50.774068  445201 machine.go:93] provisionDockerMachine start ...
	I1026 09:13:50.774169  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:50.800422  445201 main.go:141] libmachine: Using SSH client type: native
	I1026 09:13:50.800747  445201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33375 <nil> <nil>}
	I1026 09:13:50.800757  445201 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:13:50.974542  445201 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-275732
	
	I1026 09:13:50.974584  445201 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-275732"
	I1026 09:13:50.974683  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:51.012659  445201 main.go:141] libmachine: Using SSH client type: native
	I1026 09:13:51.012992  445201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33375 <nil> <nil>}
	I1026 09:13:51.013006  445201 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-275732 && echo "kubernetes-upgrade-275732" | sudo tee /etc/hostname
	I1026 09:13:51.196910  445201 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-275732
	
	I1026 09:13:51.197096  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:51.219044  445201 main.go:141] libmachine: Using SSH client type: native
	I1026 09:13:51.219418  445201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33375 <nil> <nil>}
	I1026 09:13:51.219441  445201 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-275732' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-275732/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-275732' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:13:51.380020  445201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:13:51.380043  445201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:13:51.380078  445201 ubuntu.go:190] setting up certificates
	I1026 09:13:51.380087  445201 provision.go:84] configureAuth start
	I1026 09:13:51.380145  445201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-275732
	I1026 09:13:51.404394  445201 provision.go:143] copyHostCerts
	I1026 09:13:51.404462  445201 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:13:51.404479  445201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:13:51.404548  445201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:13:51.404649  445201 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:13:51.404654  445201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:13:51.404679  445201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:13:51.404737  445201 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:13:51.404742  445201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:13:51.404783  445201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:13:51.404844  445201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-275732 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-275732 localhost minikube]
	I1026 09:13:51.865003  445201 provision.go:177] copyRemoteCerts
	I1026 09:13:51.865124  445201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:13:51.865218  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:51.883036  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
	I1026 09:13:51.995107  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:13:52.022443  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 09:13:52.054143  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:13:52.088062  445201 provision.go:87] duration metric: took 707.953241ms to configureAuth
	I1026 09:13:52.088139  445201 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:13:52.088361  445201 config.go:182] Loaded profile config "kubernetes-upgrade-275732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:13:52.088517  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:52.128695  445201 main.go:141] libmachine: Using SSH client type: native
	I1026 09:13:52.129001  445201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33375 <nil> <nil>}
	I1026 09:13:52.129015  445201 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:13:52.860803  445201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
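The CRIO_MINIKUBE_OPTIONS value echoed back above is written to /etc/sysconfig/crio.minikube, which the crio unit in the kicbase image presumably pulls in via an EnvironmentFile= directive. A quick way to confirm the wiring from inside the node (hypothetical check, not part of this run):

	# show how crio.service sources the drop-in environment file
	systemctl cat crio | grep -n -i environment
	cat /etc/sysconfig/crio.minikube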
	I1026 09:13:52.860832  445201 machine.go:96] duration metric: took 2.086754523s to provisionDockerMachine
	I1026 09:13:52.860843  445201 start.go:293] postStartSetup for "kubernetes-upgrade-275732" (driver="docker")
	I1026 09:13:52.860854  445201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:13:52.860932  445201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:13:52.861004  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:52.898433  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
	I1026 09:13:53.025310  445201 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:13:53.029121  445201 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:13:53.029200  445201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:13:53.029226  445201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:13:53.029320  445201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:13:53.029467  445201 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:13:53.029623  445201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:13:53.043203  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:13:53.075775  445201 start.go:296] duration metric: took 214.915965ms for postStartSetup
	I1026 09:13:53.075935  445201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:13:53.076014  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:53.098846  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
	I1026 09:13:53.213230  445201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:13:53.218779  445201 fix.go:56] duration metric: took 2.477741212s for fixHost
	I1026 09:13:53.218803  445201 start.go:83] releasing machines lock for "kubernetes-upgrade-275732", held for 2.47779494s
	I1026 09:13:53.218882  445201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-275732
	I1026 09:13:53.252268  445201 ssh_runner.go:195] Run: cat /version.json
	I1026 09:13:53.252345  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:53.252637  445201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:13:53.252691  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:13:53.286174  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
	I1026 09:13:53.294040  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
	I1026 09:13:53.525304  445201 ssh_runner.go:195] Run: systemctl --version
	I1026 09:13:53.531974  445201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:13:53.700125  445201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:13:53.727744  445201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:13:53.727838  445201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:13:53.777693  445201 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:13:53.777719  445201 start.go:495] detecting cgroup driver to use...
	I1026 09:13:53.777761  445201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:13:53.777813  445201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:13:53.851404  445201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:13:53.891568  445201 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:13:53.891637  445201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:13:53.931741  445201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:13:53.973323  445201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:13:54.289271  445201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:13:54.607524  445201 docker.go:234] disabling docker service ...
	I1026 09:13:54.607589  445201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:13:54.637577  445201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:13:54.676157  445201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:13:54.996388  445201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:13:55.318270  445201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:13:55.349638  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:13:55.384688  445201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:13:55.384754  445201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:13:55.418174  445201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:13:55.418249  445201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:13:55.467432  445201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:13:55.489297  445201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:13:55.518567  445201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:13:55.527013  445201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:13:55.554967  445201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:13:55.592295  445201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
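Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following fragment (reconstructed from the commands, not dumped from the test host):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]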
	I1026 09:13:55.607988  445201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:13:55.619730  445201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:13:55.636356  445201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:13:55.940677  445201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:15:26.181796  445201 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.241088272s)
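Note the anomaly here: `sudo systemctl restart crio` blocked for 1m30s, which accounts for most of this phase of TestKubernetesUpgrade. If this recurs, the unit's journal would be the first place to look (hypothetical diagnosis commands, not run here):

	# inspect what CRI-O was doing during the slow restart
	sudo journalctl -u crio --no-pager -n 100
	sudo systemctl status crio --no-pager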
	I1026 09:15:26.181821  445201 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:15:26.181881  445201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:15:26.186217  445201 start.go:563] Will wait 60s for crictl version
	I1026 09:15:26.186287  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:15:26.189946  445201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:15:26.216210  445201 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:15:26.216316  445201 ssh_runner.go:195] Run: crio --version
	I1026 09:15:26.250184  445201 ssh_runner.go:195] Run: crio --version
	I1026 09:15:26.286547  445201 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:15:26.289496  445201 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-275732 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:15:26.308353  445201 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:15:26.312682  445201 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-275732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-275732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:15:26.312822  445201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:15:26.312892  445201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:15:26.346881  445201 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:15:26.346907  445201 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:15:26.346970  445201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:15:26.376965  445201 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:15:26.376986  445201 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:15:26.376994  445201 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:15:26.377094  445201 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-275732 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-275732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
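The empty `ExecStart=` line in the generated drop-in is the standard systemd override idiom: it clears the ExecStart inherited from the packaged kubelet unit so the following ExecStart fully replaces it rather than appending a second command. The merged result can be inspected with:

	# print kubelet.service plus the 10-kubeadm.conf drop-in as systemd merges them
	systemctl cat kubelet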
	I1026 09:15:26.377179  445201 ssh_runner.go:195] Run: crio config
	I1026 09:15:26.446733  445201 cni.go:84] Creating CNI manager for ""
	I1026 09:15:26.446807  445201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:15:26.446838  445201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:15:26.446891  445201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-275732 NodeName:kubernetes-upgrade-275732 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:15:26.447059  445201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-275732"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
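The rendered manifest above is shipped to /var/tmp/minikube/kubeadm.yaml.new a few lines below; newer kubeadm releases can lint such a file offline (a hedged suggestion, the test itself does not run this):

	# static validation of the generated kubeadm configuration
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new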
	I1026 09:15:26.447171  445201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:15:26.455522  445201 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:15:26.455634  445201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:15:26.463587  445201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1026 09:15:26.476764  445201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:15:26.490005  445201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1026 09:15:26.504903  445201 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:15:26.509073  445201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:15:26.665158  445201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:15:26.681253  445201 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732 for IP: 192.168.76.2
	I1026 09:15:26.681316  445201 certs.go:195] generating shared ca certs ...
	I1026 09:15:26.681346  445201 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:15:26.681524  445201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:15:26.681606  445201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:15:26.681640  445201 certs.go:257] generating profile certs ...
	I1026 09:15:26.681760  445201 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.key
	I1026 09:15:26.681846  445201 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/apiserver.key.08a6687e
	I1026 09:15:26.681922  445201 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/proxy-client.key
	I1026 09:15:26.682072  445201 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:15:26.682151  445201 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:15:26.682176  445201 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:15:26.682235  445201 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:15:26.682291  445201 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:15:26.682352  445201 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:15:26.682430  445201 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:15:26.683159  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:15:26.710774  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:15:26.728805  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:15:26.753164  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:15:26.784622  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 09:15:26.815816  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:15:26.840554  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:15:26.860092  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:15:26.882123  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:15:26.902597  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:15:26.924450  445201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:15:26.945967  445201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:15:26.960963  445201 ssh_runner.go:195] Run: openssl version
	I1026 09:15:26.968038  445201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:15:26.979441  445201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:15:26.983691  445201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:15:26.983807  445201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:15:27.027363  445201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:15:27.035696  445201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:15:27.044487  445201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:15:27.048977  445201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:15:27.049095  445201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:15:27.091507  445201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:15:27.099890  445201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:15:27.109965  445201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:15:27.114295  445201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:15:27.114406  445201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:15:27.156896  445201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
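Each `openssl x509 -hash -noout` run above prints the subject-name hash that OpenSSL's certificate lookup expects, which is where the symlink names b5213941.0, 51391683.0 and 3ec20f2e.0 come from. The mapping is reproducible by hand:

	# the printed hash becomes the /etc/ssl/certs/<hash>.0 symlink name
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem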
	I1026 09:15:27.165342  445201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:15:27.169695  445201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:15:27.212677  445201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:15:27.260001  445201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:15:27.302609  445201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:15:27.347512  445201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:15:27.389714  445201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
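The `-checkend 86400` probes above answer via exit status, not output: zero means the certificate stays valid for at least the next 86400 seconds (24 hours), non-zero means it expires within that window, which is what would trigger cert regeneration. For example (hypothetical file name):

	# exit 0 = valid for 24h more; exit 1 = expiring or expired
	openssl x509 -noout -in cert.pem -checkend 86400 && echo ok || echo "expires within 24h"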
	I1026 09:15:27.433469  445201 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-275732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-275732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:15:27.433553  445201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:15:27.433680  445201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:15:27.481474  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:15:27.481502  445201 cri.go:89] found id: "e599c4c5c4a76341b4461c8b0bfe531f5bbb4d589ede77b570057b5c1343a577"
	I1026 09:15:27.481508  445201 cri.go:89] found id: "aa63b18f8f4e02fa17135a46fafa054d876dbb2489a83cf3dba7b6f037ab860b"
	I1026 09:15:27.481511  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:15:27.481514  445201 cri.go:89] found id: "b139de0a7437bf3a145f81566291d030abce012e3c44caf33b4553c8ac87342b"
	I1026 09:15:27.481550  445201 cri.go:89] found id: ""
	I1026 09:15:27.481619  445201 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:15:27.492856  445201 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:15:27Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:15:27.492967  445201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:15:27.508748  445201 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:15:27.508770  445201 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:15:27.508857  445201 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:15:27.520040  445201 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:15:27.520687  445201 kubeconfig.go:125] found "kubernetes-upgrade-275732" server: "https://192.168.76.2:8443"
	I1026 09:15:27.521344  445201 kapi.go:59] client config for kubernetes-upgrade-275732: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 09:15:27.521882  445201 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 09:15:27.521908  445201 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 09:15:27.521914  445201 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 09:15:27.522094  445201 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 09:15:27.522110  445201 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 09:15:27.522508  445201 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:15:27.533680  445201 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 09:15:27.533715  445201 kubeadm.go:601] duration metric: took 24.938065ms to restartPrimaryControlPlane
	I1026 09:15:27.533725  445201 kubeadm.go:402] duration metric: took 100.267622ms to StartCluster
	I1026 09:15:27.533761  445201 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:15:27.533844  445201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:15:27.534509  445201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:15:27.534863  445201 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:15:27.535219  445201 config.go:182] Loaded profile config "kubernetes-upgrade-275732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:15:27.535389  445201 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:15:27.535475  445201 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-275732"
	I1026 09:15:27.535488  445201 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-275732"
	W1026 09:15:27.535494  445201 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:15:27.535518  445201 host.go:66] Checking if "kubernetes-upgrade-275732" exists ...
	I1026 09:15:27.535977  445201 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-275732 --format={{.State.Status}}
	I1026 09:15:27.536211  445201 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-275732"
	I1026 09:15:27.536251  445201 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-275732"
	I1026 09:15:27.536573  445201 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-275732 --format={{.State.Status}}
	I1026 09:15:27.540949  445201 out.go:179] * Verifying Kubernetes components...
	I1026 09:15:27.543855  445201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:15:27.566362  445201 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:15:27.569590  445201 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:15:27.569616  445201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:15:27.569689  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:15:27.583383  445201 kapi.go:59] client config for kubernetes-upgrade-275732: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 09:15:27.583668  445201 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-275732"
	W1026 09:15:27.583680  445201 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:15:27.583703  445201 host.go:66] Checking if "kubernetes-upgrade-275732" exists ...
	I1026 09:15:27.584126  445201 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-275732 --format={{.State.Status}}
	I1026 09:15:27.614999  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
	I1026 09:15:27.620939  445201 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:27.620961  445201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:15:27.621028  445201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-275732
	I1026 09:15:27.656524  445201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33375 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa Username:docker}
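
The two "scp memory --> ..." lines stream in-memory manifest bytes to the node over the SSH clients opened just above. minikube implements the scp protocol internally; as a rough stand-in with the same effect, one can pipe the bytes into sudo tee over an x/crypto/ssh session (port, user, and key path copied from the sshutil lines; the StorageClass payload is a placeholder):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyMemory writes data to a remote path over an established SSH client.
// Piping into "sudo tee" is a simpler stand-in for minikube's scp transfer.
func copyMemory(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + remotePath + " >/dev/null")
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-293616/.minikube/machines/kubernetes-upgrade-275732/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33375", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := copyMemory(client, []byte("kind: StorageClass\n"), "/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		panic(err)
	}
}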
	I1026 09:15:27.840795  445201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:15:27.841782  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:27.867774  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:15:27.903903  445201 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:15:27.904001  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
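
The sudo pgrep -xnf kube-apiserver.*minikube.* command repeated throughout the rest of this log is a ~500ms polling loop waiting for the apiserver process to appear. A minimal local sketch of that wait (run directly rather than through minikube's ssh_runner) could be:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls "pgrep -xnf" until a kube-apiserver process
// matching the pattern exists or the deadline passes — the same check the
// repeated log lines above and below perform over SSH.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(90 * time.Second); err != nil {
		fmt.Println(err)
	}
}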
	W1026 09:15:28.027259  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:28.027312  445201 retry.go:31] will retry after 183.529191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 09:15:28.059137  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:28.059220  445201 retry.go:31] will retry after 218.553729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
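
Both applies fail with "connection refused" because kubectl validates the manifests against the not-yet-serving apiserver on localhost:8443, and retry.go reschedules them with growing, jittered delays (183ms and 218ms here, then 459ms, 808ms, and eventually tens of seconds later in the log). A hedged sketch of such an exponential-backoff retry helper — not minikube's exact retry.go implementation, and the jitter scheme is an assumption:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with roughly exponential, jittered delays, capped at
// max, in the spirit of the retry.go backoff visible in this log.
func retryExpo(fn func() error, base, max time.Duration, attempts int) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	_ = retryExpo(func() error { return fmt.Errorf("connection refused") },
		200*time.Millisecond, 30*time.Second, 5)
}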
	I1026 09:15:28.211463  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:28.278093  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:28.318207  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:28.318355  445201 retry.go:31] will retry after 459.225929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 09:15:28.387323  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:28.387423  445201 retry.go:31] will retry after 401.960862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:28.404450  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:28.778271  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:28.789860  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:15:28.904430  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1026 09:15:29.024172  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:29.024265  445201 retry.go:31] will retry after 808.669025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 09:15:29.124090  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:29.124183  445201 retry.go:31] will retry after 837.345357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:29.404757  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:29.834054  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:29.904660  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:29.962207  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:30.165909  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:30.165946  445201 retry.go:31] will retry after 477.58866ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 09:15:30.238461  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:30.238495  445201 retry.go:31] will retry after 481.133892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:30.404848  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:30.643935  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:30.720400  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:30.893650  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:30.893683  445201 retry.go:31] will retry after 746.944782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:30.904932  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1026 09:15:31.014103  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:31.014137  445201 retry.go:31] will retry after 1.396743359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:31.404783  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:31.640920  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:15:31.839043  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:31.839075  445201 retry.go:31] will retry after 1.56049774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:31.904371  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:32.405056  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:32.411480  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:32.540117  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:32.540150  445201 retry.go:31] will retry after 1.368790519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:32.904740  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:33.400539  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:33.404895  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1026 09:15:33.574744  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:33.574814  445201 retry.go:31] will retry after 2.393416784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:33.904302  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:33.909670  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:34.095322  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:34.095368  445201 retry.go:31] will retry after 3.237603792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:34.404870  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:34.905063  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:35.404694  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:35.904281  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:35.969101  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:15:36.069829  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:36.069861  445201 retry.go:31] will retry after 3.904171312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:36.404321  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:36.904165  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:37.333602  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:37.400356  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:37.400385  445201 retry.go:31] will retry after 5.626432847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:37.404523  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:37.905112  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:38.404163  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:38.904757  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:39.404175  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:39.904659  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:39.975161  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:15:40.068480  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:40.068560  445201 retry.go:31] will retry after 8.194495876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:40.405028  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:40.904869  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:41.404888  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:41.904722  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:42.404417  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:42.904138  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:43.030850  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:43.109064  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:43.109098  445201 retry.go:31] will retry after 6.672304722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:43.404267  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:43.904178  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:44.404709  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:44.904803  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:45.405086  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:45.904674  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:46.405109  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:46.904192  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:47.404594  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:47.904122  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:48.263267  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:15:48.330828  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:48.330863  445201 retry.go:31] will retry after 7.016001466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:48.404962  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:48.904746  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:49.404124  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:49.781566  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:15:49.846132  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:49.846165  445201 retry.go:31] will retry after 5.011722333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:49.904380  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:50.404285  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:50.904935  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:51.404974  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:51.904470  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:52.404779  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:52.904182  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:53.404131  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:53.904708  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:54.404161  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:54.858998  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:15:54.904474  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1026 09:15:54.920815  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:54.920845  445201 retry.go:31] will retry after 9.336079553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:55.347268  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:15:55.404682  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1026 09:15:55.420264  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:55.420300  445201 retry.go:31] will retry after 10.666997105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:15:55.904321  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:56.405056  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:56.904759  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:57.405100  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:57.904439  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:58.404799  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:58.904973  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:59.404465  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:15:59.904345  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:00.404212  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:00.904958  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:01.404197  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:01.904077  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:02.404590  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:02.904499  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:03.404941  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:03.905027  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:04.258084  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:16:04.321689  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:04.321721  445201 retry.go:31] will retry after 31.382131962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:04.404877  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:04.904178  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:05.404969  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:05.904155  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:06.087548  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:16:06.153519  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:06.153558  445201 retry.go:31] will retry after 15.667803694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:06.404804  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:06.904945  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:07.404211  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:07.904116  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:08.404639  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:08.904181  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:09.404804  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:09.905023  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:10.404893  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:10.904189  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:11.404347  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:11.904957  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:12.404930  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:12.904552  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:13.404586  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:13.904303  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:14.404787  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:14.904265  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:15.404578  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:15.904418  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:16.404814  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:16.904111  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:17.404981  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:17.904508  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:18.404271  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:18.904812  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:19.405072  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:19.904794  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:20.404146  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:20.905038  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:21.404855  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:21.822428  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:16:21.885147  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:21.885176  445201 retry.go:31] will retry after 36.674720772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:21.904451  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:22.404399  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:22.904420  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:23.404143  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:23.904994  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:24.405094  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:24.904290  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:25.404932  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:25.904298  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:26.404146  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:26.904522  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:27.404071  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:27.904841  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:27.904922  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:27.947240  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:27.947264  445201 cri.go:89] found id: ""
	I1026 09:16:27.947272  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:27.947327  445201 ssh_runner.go:195] Run: which crictl
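
The cri.go lines above resolve container IDs by name filter through crictl before log collection. A small sketch of the same lookup (run locally here; minikube runs it over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers asks crictl for the IDs of all containers, running or
// exited, whose name matches the filter — mirroring the cri.go lookups above.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listCRIContainers("kube-apiserver")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}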
	I1026 09:16:27.951068  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:27.951138  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:27.979942  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:27.979965  445201 cri.go:89] found id: ""
	I1026 09:16:27.979973  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:27.980024  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:27.984014  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:27.984083  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:28.016316  445201 cri.go:89] found id: ""
	I1026 09:16:28.016345  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.016354  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:28.016360  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:28.016418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:28.073058  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:28.073079  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:28.073084  445201 cri.go:89] found id: ""
	I1026 09:16:28.073091  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:28.073155  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.078209  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.082591  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:28.082666  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:28.127339  445201 cri.go:89] found id: ""
	I1026 09:16:28.127360  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.127369  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:28.127375  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:28.127432  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:28.162170  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:28.162189  445201 cri.go:89] found id: "bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07"
	I1026 09:16:28.162194  445201 cri.go:89] found id: ""
	I1026 09:16:28.162202  445201 logs.go:282] 2 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07]
	I1026 09:16:28.162261  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.166628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.175901  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:28.175972  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:28.204244  445201 cri.go:89] found id: ""
	I1026 09:16:28.204272  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.204281  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:28.204287  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:28.204351  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:28.235132  445201 cri.go:89] found id: ""
	I1026 09:16:28.235153  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.235162  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:28.235171  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:28.235182  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:28.414267  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:28.414344  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:28.447217  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:28.447250  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:28.578852  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:28.578925  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:28.650133  445201 logs.go:123] Gathering logs for kube-controller-manager [bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07] ...
	I1026 09:16:28.650171  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07"
	I1026 09:16:28.679124  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:28.679149  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:28.785166  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:28.785258  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:28.823935  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:28.823966  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:28.910201  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
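
The refused connection on localhost:8443 confirms the apiserver is still not serving while logs are being collected. A quick equivalent probe, hitting /healthz with the same kubectl binary and kubeconfig shown in the failing command (paths copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// Probe the apiserver's /healthz endpoint; "connection refused" here means
// the control plane is not yet serving on 8443.
func main() {
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig", "get", "--raw", "/healthz").CombinedOutput()
	fmt.Printf("healthz: %s (err=%v)\n", out, err)
}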
	I1026 09:16:28.910223  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:28.910236  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:28.953498  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:28.953549  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:28.986007  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:28.986032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
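
The "Gathering logs for ..." sequence pulls systemd-unit logs via journalctl and per-container logs via crictl. A compact sketch of that collection step, assuming local execution (no ssh_runner, and the "tail -n 400" pipe on dmesg is omitted) and using the kube-apiserver container ID found above:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs reproduces the collection steps above: journalctl for systemd
// units, dmesg for kernel warnings, and "crictl logs --tail" per container.
func gatherLogs(containerIDs []string) {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
	}
	for _, id := range containerIDs {
		cmds = append(cmds, []string{"crictl", "logs", "--tail", "400", id})
	}
	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("--- %v (err=%v) ---\n%s\n", c, err, out)
	}
}

func main() {
	gatherLogs([]string{"68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"})
}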
	I1026 09:16:31.528088  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:31.538411  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:31.538477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:31.564156  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:31.564176  445201 cri.go:89] found id: ""
	I1026 09:16:31.564184  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:31.564240  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.567923  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:31.567989  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:31.597656  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:31.597675  445201 cri.go:89] found id: ""
	I1026 09:16:31.597683  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:31.597736  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.601689  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:31.601768  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:31.629019  445201 cri.go:89] found id: ""
	I1026 09:16:31.629044  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.629053  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:31.629060  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:31.629126  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:31.656015  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:31.656036  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:31.656041  445201 cri.go:89] found id: ""
	I1026 09:16:31.656048  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:31.656102  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.659825  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.663471  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:31.663540  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:31.689125  445201 cri.go:89] found id: ""
	I1026 09:16:31.689152  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.689160  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:31.689167  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:31.689294  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:31.718149  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:31.718169  445201 cri.go:89] found id: ""
	I1026 09:16:31.718177  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:31.718228  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.721878  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:31.721959  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:31.749071  445201 cri.go:89] found id: ""
	I1026 09:16:31.749098  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.749108  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:31.749115  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:31.749242  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:31.776036  445201 cri.go:89] found id: ""
	I1026 09:16:31.776102  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.776146  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:31.776181  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:31.776199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:31.807779  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:31.807810  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:31.835831  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:31.835874  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:31.923892  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:31.923944  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:32.064335  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:32.064376  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:32.100733  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:32.100770  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:32.129916  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:32.129948  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:32.146572  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:32.146612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:32.212390  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:32.212413  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:32.212440  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:32.303251  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:32.303290  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
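	
	The cycle above repeats throughout this failure: probe for a kube-apiserver process with pgrep, enumerate control-plane containers through crictl, then gather their logs. A minimal Go sketch of that probe loop, assuming the commands run locally (runSSH and containerIDs below are hypothetical stand-ins for minikube's ssh_runner and cri helpers, not its actual API):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// runSSH is a hypothetical stand-in for ssh_runner: here it just runs
	// the command locally through bash, as the remote runner does over SSH.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}
	
	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>",
	// returning one CRI container ID per line of output.
	func containerIDs(name string) []string {
		out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
		if err != nil || out == "" {
			return nil
		}
		return strings.Fields(out)
	}
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Is any apiserver process alive at all? pgrep exits non-zero if not.
			if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				for _, c := range []string{"kube-apiserver", "etcd", "coredns",
					"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
					ids := containerIDs(c)
					fmt.Printf("%-24s %d containers: %v\n", c, len(ids), ids)
				}
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between cycles
		}
	}
	
	Components that keep returning no IDs (coredns, kube-proxy, kindnet, storage-provisioner in this run) are the first hint that the kubelet never got far enough to start them.
	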
	I1026 09:16:34.854846  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:34.867774  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:34.867849  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:34.895116  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:34.895137  445201 cri.go:89] found id: ""
	I1026 09:16:34.895144  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:34.895202  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:34.899593  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:34.899663  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:34.928083  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:34.928102  445201 cri.go:89] found id: ""
	I1026 09:16:34.928110  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:34.928193  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:34.932132  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:34.932202  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:34.964519  445201 cri.go:89] found id: ""
	I1026 09:16:34.964541  445201 logs.go:282] 0 containers: []
	W1026 09:16:34.964550  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:34.964556  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:34.964614  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:35.003980  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:35.004001  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:35.004006  445201 cri.go:89] found id: ""
	I1026 09:16:35.004015  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:35.004080  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.009750  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.015183  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:35.015256  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:35.057026  445201 cri.go:89] found id: ""
	I1026 09:16:35.057052  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.057061  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:35.057067  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:35.057154  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:35.093208  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:35.093231  445201 cri.go:89] found id: ""
	I1026 09:16:35.093240  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:35.093328  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.098073  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:35.098201  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:35.146763  445201 cri.go:89] found id: ""
	I1026 09:16:35.146836  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.146866  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:35.146887  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:35.146980  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:35.193198  445201 cri.go:89] found id: ""
	I1026 09:16:35.193224  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.193233  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:35.193276  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:35.193295  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:35.209978  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:35.210006  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:35.270161  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:35.270197  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:35.306149  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:35.306226  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:35.404229  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:35.404265  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:35.461433  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:35.461514  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:35.632313  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:35.632388  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 09:16:35.704335  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:16:35.710695  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:35.710727  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:35.710741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	W1026 09:16:35.829972  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:35.830049  445201 retry.go:31] will retry after 24.298648909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
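	
	The apply failure above is retried with a backoff: retry.go:31 schedules the next attempt 24.298648909s out. A rough Go sketch of that retry shape, reusing the same manifest path and kubectl invocation the log shows; the attempt cap and backoff constants here are illustrative, not minikube's actual tuning:
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)
	
	// applyAddon runs the same command the log shows failing: sudo accepts
	// the KUBECONFIG=... assignment before the kubectl binary path.
	func applyAddon(manifest string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply failed: %v\n%s", err, out)
		}
		return nil
	}
	
	func main() {
		manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
		backoff := 10 * time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			if err := applyAddon(manifest); err == nil {
				fmt.Println("addon applied")
				return
			} else {
				// Jittered backoff; the log's 24.298648909s is one such draw.
				wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
				fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, wait, err)
				time.Sleep(wait)
				backoff *= 2
			}
		}
	}
	
	Until the apiserver answers on 8443, every retry will fail the same way, since kubectl cannot even download the OpenAPI schema it validates against.
	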
	I1026 09:16:35.900420  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:35.900498  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:35.943003  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:35.943071  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:38.478842  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:38.499470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:38.499549  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:38.561966  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:38.561991  445201 cri.go:89] found id: ""
	I1026 09:16:38.562000  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:38.562054  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.565941  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:38.566019  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:38.618045  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:38.618070  445201 cri.go:89] found id: ""
	I1026 09:16:38.618078  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:38.618133  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.622308  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:38.622382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:38.685130  445201 cri.go:89] found id: ""
	I1026 09:16:38.685158  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.685167  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:38.685173  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:38.685237  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:38.741160  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:38.741185  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:38.741190  445201 cri.go:89] found id: ""
	I1026 09:16:38.741197  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:38.741253  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.745509  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.749859  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:38.749939  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:38.787906  445201 cri.go:89] found id: ""
	I1026 09:16:38.787934  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.787943  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:38.787949  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:38.788007  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:38.843118  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:38.843144  445201 cri.go:89] found id: ""
	I1026 09:16:38.843153  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:38.843209  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.851429  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:38.851513  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:38.911991  445201 cri.go:89] found id: ""
	I1026 09:16:38.912018  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.912027  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:38.912033  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:38.912093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:38.964607  445201 cri.go:89] found id: ""
	I1026 09:16:38.964634  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.964643  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:38.964657  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:38.964668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:39.099227  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:39.099252  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:39.099266  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:39.257625  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:39.257710  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:39.323941  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:39.323981  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:39.400574  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:39.400612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:39.477668  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:39.477707  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:39.647471  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:39.647511  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:39.687069  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:39.687105  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:39.730533  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:39.730615  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:39.781743  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:39.781825  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
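	
	Each "Gathering logs for ..." pair above maps one source to one shell command: container logs come from crictl with a 400-line tail, host services from journalctl, plus a filtered dmesg. A compact Go sketch of that fan-out (gather is a hypothetical helper; the container IDs are shortened placeholders for the full IDs found earlier in this log):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// gather runs one collection command through bash and prints its output
	// under a labelled header, mirroring logs.go's per-source loop.
	func gather(label, cmd string) {
		fmt.Printf("==> %s <==\n", label)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gather %s failed: %v\n", label, err)
		}
		fmt.Println(string(out))
	}
	
	func main() {
		// Per-container logs, keyed by the CRI container IDs found earlier
		// (shortened here for illustration).
		for label, id := range map[string]string{
			"kube-apiserver": "68f6053e321f",
			"etcd":           "de32875ececb",
		} {
			gather(label, "sudo /usr/local/bin/crictl logs --tail 400 "+id)
		}
		// Host-level sources the same cycle collects.
		gather("CRI-O", "sudo journalctl -u crio -n 400")
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	}
	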
	I1026 09:16:42.397606  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:42.417937  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:42.418016  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:42.495359  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:42.495378  445201 cri.go:89] found id: ""
	I1026 09:16:42.495386  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:42.495440  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.501607  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:42.501703  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:42.551870  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:42.551932  445201 cri.go:89] found id: ""
	I1026 09:16:42.551954  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:42.552046  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.556449  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:42.556566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:42.586577  445201 cri.go:89] found id: ""
	I1026 09:16:42.586646  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.586678  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:42.586703  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:42.586820  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:42.617869  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:42.617892  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:42.617897  445201 cri.go:89] found id: ""
	I1026 09:16:42.617915  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:42.617970  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.625871  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.630165  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:42.630242  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:42.666758  445201 cri.go:89] found id: ""
	I1026 09:16:42.666834  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.666858  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:42.666880  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:42.666965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:42.701859  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:42.701926  445201 cri.go:89] found id: ""
	I1026 09:16:42.701948  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:42.702031  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.706073  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:42.706198  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:42.734210  445201 cri.go:89] found id: ""
	I1026 09:16:42.734238  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.734247  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:42.734253  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:42.734316  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:42.761520  445201 cri.go:89] found id: ""
	I1026 09:16:42.761543  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.761561  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:42.761577  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:42.761588  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:42.852029  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:42.852074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:42.900883  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:42.900926  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:42.919113  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:42.919143  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:43.013913  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:43.013936  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:43.013948  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:43.112560  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:43.112603  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:43.165342  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:43.165375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:43.197760  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:43.197792  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:43.240362  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:43.240393  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:43.395807  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:43.395853  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
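	
	Every "describe nodes" attempt in this log dies the same way: connection refused on localhost:8443, meaning nothing is listening on the apiserver's secure port inside the node even though a kube-apiserver container exists. A short Go probe that distinguishes a closed port from a merely slow apiserver (illustrative only; not part of minikube):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// e.g. "dial tcp [::1]:8443: connect: connection refused",
			// matching the kubectl errors repeated throughout this log.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open; failures are higher in the stack")
	}
	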
	I1026 09:16:45.947614  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:45.958234  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:45.958307  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:45.983853  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:45.983876  445201 cri.go:89] found id: ""
	I1026 09:16:45.983884  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:45.983938  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:45.987866  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:45.987940  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:46.014318  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:46.014341  445201 cri.go:89] found id: ""
	I1026 09:16:46.014350  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:46.014411  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.018353  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:46.018426  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:46.048052  445201 cri.go:89] found id: ""
	I1026 09:16:46.048078  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.048086  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:46.048093  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:46.048200  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:46.076123  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:46.076197  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:46.076208  445201 cri.go:89] found id: ""
	I1026 09:16:46.076223  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:46.076283  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.080561  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.084247  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:46.084318  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:46.115050  445201 cri.go:89] found id: ""
	I1026 09:16:46.115076  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.115085  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:46.115104  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:46.115163  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:46.142084  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:46.142107  445201 cri.go:89] found id: ""
	I1026 09:16:46.142115  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:46.142194  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.146126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:46.146204  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:46.173893  445201 cri.go:89] found id: ""
	I1026 09:16:46.173928  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.173939  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:46.173945  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:46.174009  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:46.207007  445201 cri.go:89] found id: ""
	I1026 09:16:46.207074  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.207098  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:46.207121  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:46.207146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:46.281196  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:46.281257  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:46.281286  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:46.365641  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:46.365679  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:46.395828  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:46.395859  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:46.429785  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:46.429825  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:46.450173  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:46.450204  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:46.493277  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:46.493310  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:46.552581  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:46.552619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:46.588862  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:46.588892  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:46.679688  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:46.679719  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:49.344869  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:49.356555  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:49.356623  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:49.389547  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:49.389571  445201 cri.go:89] found id: ""
	I1026 09:16:49.389580  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:49.389639  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.393152  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:49.393236  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:49.419217  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:49.419240  445201 cri.go:89] found id: ""
	I1026 09:16:49.419249  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:49.419320  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.423239  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:49.423309  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:49.453234  445201 cri.go:89] found id: ""
	I1026 09:16:49.453257  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.453266  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:49.453272  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:49.453335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:49.483822  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:49.483846  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:49.483851  445201 cri.go:89] found id: ""
	I1026 09:16:49.483859  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:49.483912  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.487530  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.491109  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:49.491181  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:49.520353  445201 cri.go:89] found id: ""
	I1026 09:16:49.520374  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.520383  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:49.520389  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:49.520448  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:49.546473  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:49.546502  445201 cri.go:89] found id: ""
	I1026 09:16:49.546510  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:49.546569  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.550263  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:49.550336  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:49.583559  445201 cri.go:89] found id: ""
	I1026 09:16:49.583588  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.583597  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:49.583604  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:49.583661  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:49.608811  445201 cri.go:89] found id: ""
	I1026 09:16:49.608834  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.608842  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:49.608856  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:49.608867  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:49.641306  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:49.641330  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:49.786127  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:49.786165  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:49.892036  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:49.892055  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:49.892068  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:49.995466  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:49.995550  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:50.055401  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:50.055506  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:50.086674  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:50.086794  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:50.174213  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:50.174252  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:50.195967  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:50.196051  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:50.250465  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:50.250500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:52.778890  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:52.789937  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:52.790008  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:52.815609  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:52.815631  445201 cri.go:89] found id: ""
	I1026 09:16:52.815639  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:52.815699  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.819424  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:52.819505  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:52.845893  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:52.845918  445201 cri.go:89] found id: ""
	I1026 09:16:52.845927  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:52.845982  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.849890  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:52.849965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:52.876233  445201 cri.go:89] found id: ""
	I1026 09:16:52.876258  445201 logs.go:282] 0 containers: []
	W1026 09:16:52.876267  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:52.876274  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:52.876336  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:52.906825  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:52.906846  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:52.906851  445201 cri.go:89] found id: ""
	I1026 09:16:52.906858  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:52.906914  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.910552  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.913863  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:52.913934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:52.941593  445201 cri.go:89] found id: ""
	I1026 09:16:52.941620  445201 logs.go:282] 0 containers: []
	W1026 09:16:52.941629  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:52.941635  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:52.941693  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:52.978764  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:52.978837  445201 cri.go:89] found id: ""
	I1026 09:16:52.978859  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:52.978941  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.984587  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:52.984647  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:53.017265  445201 cri.go:89] found id: ""
	I1026 09:16:53.017289  445201 logs.go:282] 0 containers: []
	W1026 09:16:53.017298  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:53.017305  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:53.017367  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:53.067882  445201 cri.go:89] found id: ""
	I1026 09:16:53.067948  445201 logs.go:282] 0 containers: []
	W1026 09:16:53.067969  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:53.068006  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:53.068037  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:53.142921  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:53.143001  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:53.181811  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:53.181841  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:53.202147  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:53.202222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:53.297486  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:53.297551  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:53.297577  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:53.385965  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:53.386003  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:53.425657  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:53.425681  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:53.489466  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:53.489496  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:53.586891  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:53.586932  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:53.788719  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:53.788755  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:56.390883  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:56.401483  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:56.401546  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:56.429199  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:56.429218  445201 cri.go:89] found id: ""
	I1026 09:16:56.429227  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:56.429282  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:56.432974  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:56.433043  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:56.475065  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:56.475089  445201 cri.go:89] found id: ""
	I1026 09:16:56.475097  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:56.475162  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:56.479663  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:56.479738  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:56.515868  445201 cri.go:89] found id: ""
	I1026 09:16:56.515893  445201 logs.go:282] 0 containers: []
	W1026 09:16:56.515903  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:56.515909  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:56.515966  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:56.557550  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:56.557576  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:56.557582  445201 cri.go:89] found id: ""
	I1026 09:16:56.557589  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:56.557675  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:56.561546  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:56.565665  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:56.565762  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:56.595306  445201 cri.go:89] found id: ""
	I1026 09:16:56.595335  445201 logs.go:282] 0 containers: []
	W1026 09:16:56.595344  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:56.595368  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:56.595458  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:56.631948  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:56.631968  445201 cri.go:89] found id: ""
	I1026 09:16:56.631976  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:56.632055  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:56.639394  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:56.639520  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:56.679185  445201 cri.go:89] found id: ""
	I1026 09:16:56.679219  445201 logs.go:282] 0 containers: []
	W1026 09:16:56.679230  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:56.679254  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:56.679346  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:56.712555  445201 cri.go:89] found id: ""
	I1026 09:16:56.712581  445201 logs.go:282] 0 containers: []
	W1026 09:16:56.712589  445201 logs.go:284] No container was found matching "storage-provisioner"
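
	Note: each cri.go entry above runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as zero containers. A minimal Go sketch of that same query (illustrative only, assuming crictl is on PATH and sudo works non-interactively; not minikube's actual code path):

	    // listids.go - hedged sketch mirroring the cri.go queries above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs runs the same query as the log:
	    // sudo crictl ps -a --quiet --name=<name>
	    func containerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := containerIDs(name)
	            if err != nil {
	                fmt.Println(name, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	        }
	    }
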
	I1026 09:16:56.712605  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:56.712617  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:56.776780  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:56.776806  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:56.977616  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:56.977678  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:56.996952  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:56.996982  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:57.087758  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
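
	Every "describe nodes" pass in this section fails identically: nothing is accepting TCP connections on localhost:8443, so kubectl reports "connection refused". A one-shot probe (a hypothetical standalone helper, not part of minikube) that reproduces the condition behind those errors:

	    // probe8443.go - hedged sketch; checks the same port kubectl uses above.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            // matches the "dial tcp [::1]:8443: connect: connection refused" lines
	            fmt.Println("apiserver port closed:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port open")
	    }
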
	I1026 09:16:57.087824  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:57.087852  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:57.214405  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:57.214896  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:57.294035  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:57.294063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:57.331143  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:57.331180  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:57.364438  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:57.364621  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:57.399166  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:57.399199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:58.560426  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 09:16:58.649484  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 09:16:58.649577  445201 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
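
	The addons code logs "apply failed, will retry" before surfacing the error, i.e. the `kubectl apply --force -f <manifest>` call is wrapped in a retry. A rough sketch of such a wrapper (illustrative only; the attempt count and delay are assumptions, not minikube's actual values, and the real retry lives in addons.go):

	    // retryapply.go - hedged sketch of an apply-with-retry loop.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry shells out to kubectl apply, as in the log above,
	    // and retries on failure before giving up.
	    func applyWithRetry(manifest string, attempts int) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	            if e == nil {
	                return nil
	            }
	            err = fmt.Errorf("attempt %d: %v: %s", i+1, e, out)
	            time.Sleep(2 * time.Second)
	        }
	        return err
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
	            fmt.Println("giving up:", err) // the log above falls through to a warning instead
	        }
	    }
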
	I1026 09:17:00.011043  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:00.122918  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:00.122995  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:00.132464  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:17:00.246541  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:00.246566  445201 cri.go:89] found id: ""
	I1026 09:17:00.246580  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:00.246665  445201 ssh_runner.go:195] Run: which crictl
	W1026 09:17:00.331663  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 09:17:00.331764  445201 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1026 09:17:00.332015  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:00.332083  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:00.335950  445201 out.go:179] * Enabled addons: 
	I1026 09:17:00.342593  445201 addons.go:514] duration metric: took 1m32.807184591s for enable addons: enabled=[]
	I1026 09:17:00.429803  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:00.429832  445201 cri.go:89] found id: ""
	I1026 09:17:00.429842  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:00.429955  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:00.435396  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:00.435475  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:00.491207  445201 cri.go:89] found id: ""
	I1026 09:17:00.491236  445201 logs.go:282] 0 containers: []
	W1026 09:17:00.491246  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:00.491253  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:00.491381  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:00.569586  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:00.569614  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:00.569619  445201 cri.go:89] found id: ""
	I1026 09:17:00.569628  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:00.569688  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:00.575802  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:00.580757  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:00.580832  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:00.617202  445201 cri.go:89] found id: ""
	I1026 09:17:00.617224  445201 logs.go:282] 0 containers: []
	W1026 09:17:00.617234  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:00.617241  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:00.617301  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:00.656899  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:00.656919  445201 cri.go:89] found id: ""
	I1026 09:17:00.656928  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:00.656982  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:00.661346  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:00.661422  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:00.692487  445201 cri.go:89] found id: ""
	I1026 09:17:00.692510  445201 logs.go:282] 0 containers: []
	W1026 09:17:00.692518  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:00.692524  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:00.692579  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:00.741585  445201 cri.go:89] found id: ""
	I1026 09:17:00.741609  445201 logs.go:282] 0 containers: []
	W1026 09:17:00.741617  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:00.741633  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:00.741645  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:00.775940  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:00.776034  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:00.876256  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:00.876386  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:01.060476  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:01.060556  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:01.079495  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:01.079525  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:01.171174  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:01.171195  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:01.171208  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:01.273795  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:01.273833  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:01.362562  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:01.362683  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:01.436711  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:01.436745  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:01.479508  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:01.479548  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
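
	The timestamps show the whole diagnostic cycle (pgrep for kube-apiserver, crictl listings, log gathering) repeating roughly every three to four seconds while the apiserver stays down. A minimal sketch of such a wait loop, with an interval and deadline that are assumptions for illustration rather than minikube's configured values:

	    // pollapiserver.go - hedged sketch of the cadence the timestamps imply.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            if c, err := net.DialTimeout("tcp", "localhost:8443", time.Second); err == nil {
	                c.Close()
	                fmt.Println("apiserver is back")
	                return
	            }
	            time.Sleep(3 * time.Second) // ~3-4 s between cycles in the log above
	        }
	        fmt.Println("timed out waiting for apiserver")
	    }
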
	I1026 09:17:04.014576  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:04.028225  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:04.028307  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:04.064243  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:04.064270  445201 cri.go:89] found id: ""
	I1026 09:17:04.064279  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:04.064343  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:04.069378  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:04.069477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:04.099219  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:04.099244  445201 cri.go:89] found id: ""
	I1026 09:17:04.099253  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:04.099319  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:04.104152  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:04.104239  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:04.135047  445201 cri.go:89] found id: ""
	I1026 09:17:04.135076  445201 logs.go:282] 0 containers: []
	W1026 09:17:04.135084  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:04.135092  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:04.135152  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:04.168179  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:04.168207  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:04.168214  445201 cri.go:89] found id: ""
	I1026 09:17:04.168222  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:04.168299  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:04.175462  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:04.180292  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:04.180389  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:04.221287  445201 cri.go:89] found id: ""
	I1026 09:17:04.221315  445201 logs.go:282] 0 containers: []
	W1026 09:17:04.221324  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:04.221332  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:04.221390  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:04.252608  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:04.252634  445201 cri.go:89] found id: ""
	I1026 09:17:04.252643  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:04.252716  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:04.257651  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:04.257765  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:04.294865  445201 cri.go:89] found id: ""
	I1026 09:17:04.294915  445201 logs.go:282] 0 containers: []
	W1026 09:17:04.294924  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:04.294931  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:04.295003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:04.332629  445201 cri.go:89] found id: ""
	I1026 09:17:04.332658  445201 logs.go:282] 0 containers: []
	W1026 09:17:04.332668  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:04.332682  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:04.332703  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:04.517696  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:04.517738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:04.538267  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:04.538302  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:04.650004  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:04.650025  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:04.650039  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:04.701032  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:04.701067  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:04.790478  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:04.790519  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:04.825968  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:04.825999  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:04.921014  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:04.921051  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:05.015589  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:05.015632  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:05.053500  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:05.053544  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:07.600995  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:07.632730  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:07.632804  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:07.721308  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:07.721334  445201 cri.go:89] found id: ""
	I1026 09:17:07.721342  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:07.721401  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:07.727472  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:07.727545  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:07.808022  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:07.808046  445201 cri.go:89] found id: ""
	I1026 09:17:07.808055  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:07.808124  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:07.821706  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:07.821779  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:07.965481  445201 cri.go:89] found id: ""
	I1026 09:17:07.965510  445201 logs.go:282] 0 containers: []
	W1026 09:17:07.965519  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:07.965526  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:07.965583  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:08.066383  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:08.066403  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:08.066408  445201 cri.go:89] found id: ""
	I1026 09:17:08.066415  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:08.066480  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:08.078648  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:08.084873  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:08.084947  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:08.132993  445201 cri.go:89] found id: ""
	I1026 09:17:08.133022  445201 logs.go:282] 0 containers: []
	W1026 09:17:08.133031  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:08.133037  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:08.133093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:08.176177  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:08.176203  445201 cri.go:89] found id: ""
	I1026 09:17:08.176211  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:08.176266  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:08.181976  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:08.182044  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:08.231980  445201 cri.go:89] found id: ""
	I1026 09:17:08.232009  445201 logs.go:282] 0 containers: []
	W1026 09:17:08.232020  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:08.232026  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:08.232084  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:08.266369  445201 cri.go:89] found id: ""
	I1026 09:17:08.266399  445201 logs.go:282] 0 containers: []
	W1026 09:17:08.266408  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:08.266423  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:08.266440  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:08.314010  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:08.314045  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:08.398360  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:08.398399  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:08.570802  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:08.570890  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:08.654281  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:08.654301  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:08.654315  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:08.714237  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:08.714273  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:08.779510  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:08.779552  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:08.806430  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:08.806465  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:08.887548  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:08.887583  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:08.905319  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:08.905350  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:11.492172  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:11.503838  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:11.503915  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:11.533686  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:11.533706  445201 cri.go:89] found id: ""
	I1026 09:17:11.533714  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:11.533770  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:11.537393  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:11.537459  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:11.565011  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:11.565034  445201 cri.go:89] found id: ""
	I1026 09:17:11.565042  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:11.565097  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:11.575114  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:11.575185  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:11.613319  445201 cri.go:89] found id: ""
	I1026 09:17:11.613342  445201 logs.go:282] 0 containers: []
	W1026 09:17:11.613350  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:11.613357  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:11.613418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:11.652071  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:11.652122  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:11.652130  445201 cri.go:89] found id: ""
	I1026 09:17:11.652137  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:11.652193  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:11.656326  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:11.660268  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:11.660338  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:11.701107  445201 cri.go:89] found id: ""
	I1026 09:17:11.701128  445201 logs.go:282] 0 containers: []
	W1026 09:17:11.701136  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:11.701142  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:11.701199  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:11.737026  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:11.737045  445201 cri.go:89] found id: ""
	I1026 09:17:11.737053  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:11.737107  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:11.741018  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:11.741088  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:11.775175  445201 cri.go:89] found id: ""
	I1026 09:17:11.775196  445201 logs.go:282] 0 containers: []
	W1026 09:17:11.775205  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:11.775210  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:11.775269  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:11.804537  445201 cri.go:89] found id: ""
	I1026 09:17:11.804614  445201 logs.go:282] 0 containers: []
	W1026 09:17:11.804639  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:11.804681  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:11.804712  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:11.885057  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:11.885129  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:11.885156  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:12.004349  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:12.004389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:12.052739  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:12.052817  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:12.083759  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:12.083783  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:12.165593  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:12.165663  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:12.346969  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:12.347049  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:12.364659  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:12.364685  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:12.439551  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:12.439630  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:12.469962  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:12.469987  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:15.029346  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:15.043668  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:15.043747  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:15.083800  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:15.083825  445201 cri.go:89] found id: ""
	I1026 09:17:15.083834  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:15.083892  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:15.088855  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:15.088936  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:15.126785  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:15.126812  445201 cri.go:89] found id: ""
	I1026 09:17:15.126826  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:15.126885  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:15.131783  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:15.131858  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:15.164554  445201 cri.go:89] found id: ""
	I1026 09:17:15.164581  445201 logs.go:282] 0 containers: []
	W1026 09:17:15.164590  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:15.164597  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:15.164658  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:15.203739  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:15.203763  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:15.203770  445201 cri.go:89] found id: ""
	I1026 09:17:15.203777  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:15.203831  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:15.208118  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:15.212212  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:15.212287  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:15.245119  445201 cri.go:89] found id: ""
	I1026 09:17:15.245146  445201 logs.go:282] 0 containers: []
	W1026 09:17:15.245154  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:15.245160  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:15.245215  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:15.275761  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:15.275784  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:15.275789  445201 cri.go:89] found id: ""
	I1026 09:17:15.275797  445201 logs.go:282] 2 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:15.275848  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:15.280372  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:15.284746  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:15.284817  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:15.315010  445201 cri.go:89] found id: ""
	I1026 09:17:15.315035  445201 logs.go:282] 0 containers: []
	W1026 09:17:15.315043  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:15.315050  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:15.315108  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:15.343595  445201 cri.go:89] found id: ""
	I1026 09:17:15.343621  445201 logs.go:282] 0 containers: []
	W1026 09:17:15.343630  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:15.343639  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:15.343651  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:15.439885  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:15.439908  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:15.439923  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:15.548113  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:15.548150  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:15.607597  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:15.607634  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:15.641743  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:15.641769  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:15.673914  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:15.673939  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:15.768312  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:15.768393  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:15.843934  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:15.843961  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:16.018220  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:16.018257  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:16.035678  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:16.035711  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:16.076033  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:16.076073  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
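
	Between the 09:17:11 and 09:17:15 cycles a second kube-controller-manager container (211b28dcbd74...) appears alongside 8ab3483382..., suggesting the component was restarted while the apiserver remained down. A hypothetical helper (not part of minikube) that would surface such a change by diffing the ID sets between polls; the IDs below are taken from this log:

	    // diffids.go - hedged sketch for spotting new container IDs between cycles.
	    package main

	    import "fmt"

	    // newIDs returns the IDs present in curr but not in prev.
	    func newIDs(prev, curr []string) []string {
	        seen := make(map[string]bool, len(prev))
	        for _, id := range prev {
	            seen[id] = true
	        }
	        var added []string
	        for _, id := range curr {
	            if !seen[id] {
	                added = append(added, id)
	            }
	        }
	        return added
	    }

	    func main() {
	        prev := []string{"8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"}
	        curr := []string{
	            "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad",
	            "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef",
	        }
	        fmt.Println("new containers this cycle:", newIDs(prev, curr))
	    }
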
	I1026 09:17:18.623115  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:18.635318  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:18.635437  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:18.664267  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:18.664286  445201 cri.go:89] found id: ""
	I1026 09:17:18.664294  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:18.664348  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:18.668988  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:18.669056  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:18.704875  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:18.704893  445201 cri.go:89] found id: ""
	I1026 09:17:18.704901  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:18.704955  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:18.709118  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:18.709188  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:18.743182  445201 cri.go:89] found id: ""
	I1026 09:17:18.743205  445201 logs.go:282] 0 containers: []
	W1026 09:17:18.743214  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:18.743220  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:18.743279  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:18.795174  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:18.795193  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:18.795198  445201 cri.go:89] found id: ""
	I1026 09:17:18.795211  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:18.795267  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:18.800410  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:18.805501  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:18.805601  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:18.846793  445201 cri.go:89] found id: ""
	I1026 09:17:18.846863  445201 logs.go:282] 0 containers: []
	W1026 09:17:18.846878  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:18.846886  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:18.846978  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:18.901469  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:18.901492  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:18.901498  445201 cri.go:89] found id: ""
	I1026 09:17:18.901512  445201 logs.go:282] 2 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:18.901633  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:18.906356  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:18.911449  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:18.911562  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:18.950370  445201 cri.go:89] found id: ""
	I1026 09:17:18.950398  445201 logs.go:282] 0 containers: []
	W1026 09:17:18.950411  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:18.950451  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:18.950539  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:18.995744  445201 cri.go:89] found id: ""
	I1026 09:17:18.995778  445201 logs.go:282] 0 containers: []
	W1026 09:17:18.995786  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:18.995830  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:18.995852  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:19.096507  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:19.096529  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:19.096585  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:19.192337  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:19.192382  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:19.235859  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:19.235892  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:19.300208  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:19.300247  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:19.328309  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:19.328387  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:19.426798  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:19.426884  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:19.594464  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:19.594520  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:19.612682  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:19.612766  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:19.651545  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:19.651622  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:19.686666  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:19.686752  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:22.224226  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:22.237133  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:22.237276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:22.277189  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:22.277269  445201 cri.go:89] found id: ""
	I1026 09:17:22.277292  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:22.277388  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:22.282180  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:22.282312  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:22.325565  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:22.325642  445201 cri.go:89] found id: ""
	I1026 09:17:22.325666  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:22.325759  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:22.330160  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:22.330297  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:22.362288  445201 cri.go:89] found id: ""
	I1026 09:17:22.362361  445201 logs.go:282] 0 containers: []
	W1026 09:17:22.362385  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:22.362405  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:22.362536  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:22.407329  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:22.407406  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:22.407426  445201 cri.go:89] found id: ""
	I1026 09:17:22.407448  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:22.407563  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:22.412207  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:22.416828  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:22.416981  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:22.450329  445201 cri.go:89] found id: ""
	I1026 09:17:22.450406  445201 logs.go:282] 0 containers: []
	W1026 09:17:22.450428  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:22.450450  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:22.450566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:22.481250  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:22.481327  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:22.481346  445201 cri.go:89] found id: ""
	I1026 09:17:22.481399  445201 logs.go:282] 2 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:17:22.481508  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:22.487029  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:22.491809  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:22.491978  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:22.524609  445201 cri.go:89] found id: ""
	I1026 09:17:22.524693  445201 logs.go:282] 0 containers: []
	W1026 09:17:22.524734  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:22.524769  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:22.524879  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:22.560412  445201 cri.go:89] found id: ""
	I1026 09:17:22.560486  445201 logs.go:282] 0 containers: []
	W1026 09:17:22.560510  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:22.560534  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:22.560578  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:22.734041  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:22.734081  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:22.757362  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:22.757392  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:22.827541  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:22.827570  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:22.861901  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:22.861930  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:22.909377  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:22.909405  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:22.987559  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:22.987580  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:22.987597  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:23.078056  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:23.078095  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:23.114611  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:23.114645  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:23.178386  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:17:23.178420  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:17:23.207766  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:23.207796  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
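Each pass captures the same sources with fixed 400-line tails: the kubelet and CRI-O journald units, a filtered dmesg (-P disables the pager, -H formats for humans, -L=never disables color, --level restricts to warn and above), and per-container logs via crictl. Condensed into a standalone sketch (the output redirections are illustrative additions; the container ID is the kube-apiserver ID found in this run):

    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    id=68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668
    sudo /usr/local/bin/crictl logs --tail 400 "$id" > kube-apiserver.log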
	I1026 09:17:25.798446  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:25.810537  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:25.810612  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:25.846321  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:25.846340  445201 cri.go:89] found id: ""
	I1026 09:17:25.846347  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:25.846406  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:25.850780  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:25.850843  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:25.881682  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:25.881702  445201 cri.go:89] found id: ""
	I1026 09:17:25.881710  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:25.881763  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:25.886514  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:25.886583  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:25.921189  445201 cri.go:89] found id: ""
	I1026 09:17:25.921216  445201 logs.go:282] 0 containers: []
	W1026 09:17:25.921227  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:25.921233  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:25.921289  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:25.952737  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:25.952816  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:25.952842  445201 cri.go:89] found id: ""
	I1026 09:17:25.952877  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:25.952956  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:25.957590  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:25.961746  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:25.961813  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:25.993157  445201 cri.go:89] found id: ""
	I1026 09:17:25.993223  445201 logs.go:282] 0 containers: []
	W1026 09:17:25.993253  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:25.993273  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:25.993380  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:26.047721  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:26.047796  445201 cri.go:89] found id: ""
	I1026 09:17:26.047818  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:26.047906  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:26.052322  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:26.052466  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:26.097907  445201 cri.go:89] found id: ""
	I1026 09:17:26.097983  445201 logs.go:282] 0 containers: []
	W1026 09:17:26.098006  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:26.098047  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:26.098122  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:26.140940  445201 cri.go:89] found id: ""
	I1026 09:17:26.141016  445201 logs.go:282] 0 containers: []
	W1026 09:17:26.141039  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:26.141079  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:26.141109  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:26.233596  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:26.233673  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:26.264810  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:26.264836  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:26.297051  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:26.297127  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:26.405507  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:26.405548  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:26.450355  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:26.450393  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:26.532940  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:26.532980  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:26.566181  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:26.566256  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:26.750689  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:26.750774  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:26.769499  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:26.769575  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:26.855233  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
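The recurring describe-nodes failure is the key symptom: a kube-apiserver container exists in the CRI listing, yet nothing answers on localhost:8443. A hypothetical cross-check, not part of this run, that separates the two conditions (assumes crictl and curl are available on the node):

    # Is the container actually running, or merely present in the -a listing?
    sudo crictl inspect --output go-template --template '{{.status.state}}' \
      68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668
    # Is anything serving on the apiserver port?
    curl -ksS --max-time 2 https://localhost:8443/healthz \
      || echo 'connection refused: apiserver not serving on 8443'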
	I1026 09:17:29.356662  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:29.390100  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:29.390192  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:29.454941  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:29.454962  445201 cri.go:89] found id: ""
	I1026 09:17:29.454974  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:29.455035  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:29.467155  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:29.467240  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:29.525060  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:29.525082  445201 cri.go:89] found id: ""
	I1026 09:17:29.525090  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:29.525145  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:29.531931  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:29.532108  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:29.577152  445201 cri.go:89] found id: ""
	I1026 09:17:29.577177  445201 logs.go:282] 0 containers: []
	W1026 09:17:29.577199  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:29.577206  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:29.577276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:29.631813  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:29.631836  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:29.631842  445201 cri.go:89] found id: ""
	I1026 09:17:29.631849  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:29.631907  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:29.636572  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:29.640728  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:29.640889  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:29.683489  445201 cri.go:89] found id: ""
	I1026 09:17:29.683526  445201 logs.go:282] 0 containers: []
	W1026 09:17:29.683536  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:29.683542  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:29.683608  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:29.714015  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:29.714050  445201 cri.go:89] found id: ""
	I1026 09:17:29.714059  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:29.714121  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:29.720528  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:29.720655  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:29.757984  445201 cri.go:89] found id: ""
	I1026 09:17:29.758064  445201 logs.go:282] 0 containers: []
	W1026 09:17:29.758096  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:29.758115  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:29.758213  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:29.791755  445201 cri.go:89] found id: ""
	I1026 09:17:29.791841  445201 logs.go:282] 0 containers: []
	W1026 09:17:29.791864  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:29.791910  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:29.791952  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:29.830813  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:29.830894  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:30.054235  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:30.054277  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:30.130045  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:30.130083  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:30.213898  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:30.213991  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:30.280301  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:30.280389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:30.394643  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:30.394735  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:30.484166  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:30.484246  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:30.512591  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:30.512669  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:30.642840  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:30.642905  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:30.642934  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
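Comparing passes: at 09:17:22 two kube-controller-manager containers are listed (211b28dc... and 8ab34833...), but from 09:17:26 onward only 211b28dc... remains, so the exited older container was presumably removed between cycles and drops out of the gathering. Re-listing without --quiet shows the state column that explains such disappearances:

    # Table output includes the STATE and ATTEMPT columns for each match.
    sudo crictl ps -a --name=kube-controller-manager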
	I1026 09:17:33.284387  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:33.299752  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:33.299815  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:33.334871  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:33.334890  445201 cri.go:89] found id: ""
	I1026 09:17:33.334898  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:33.334952  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:33.339119  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:33.339188  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:33.377730  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:33.377805  445201 cri.go:89] found id: ""
	I1026 09:17:33.377828  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:33.377913  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:33.382189  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:33.382253  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:33.434675  445201 cri.go:89] found id: ""
	I1026 09:17:33.434696  445201 logs.go:282] 0 containers: []
	W1026 09:17:33.434705  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:33.434754  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:33.434821  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:33.478784  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:33.478804  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:33.478809  445201 cri.go:89] found id: ""
	I1026 09:17:33.478816  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:33.478872  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:33.483293  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:33.487077  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:33.487149  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:33.523334  445201 cri.go:89] found id: ""
	I1026 09:17:33.523412  445201 logs.go:282] 0 containers: []
	W1026 09:17:33.523435  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:33.523459  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:33.523555  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:33.559461  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:33.559481  445201 cri.go:89] found id: ""
	I1026 09:17:33.559489  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:33.559548  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:33.563627  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:33.563753  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:33.611288  445201 cri.go:89] found id: ""
	I1026 09:17:33.611310  445201 logs.go:282] 0 containers: []
	W1026 09:17:33.611318  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:33.611325  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:33.611383  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:33.647394  445201 cri.go:89] found id: ""
	I1026 09:17:33.647471  445201 logs.go:282] 0 containers: []
	W1026 09:17:33.647493  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:33.647541  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:33.647569  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:33.684606  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:33.684689  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:33.702158  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:33.702236  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:33.801609  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:33.801715  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:33.843257  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:33.843282  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:33.870651  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:33.870766  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:34.036846  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:34.036889  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:34.116045  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:34.116064  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:34.116084  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:34.162017  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:34.162049  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:34.223052  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:34.223113  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:36.820310  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:36.831970  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:36.832046  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:36.860215  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:36.860237  445201 cri.go:89] found id: ""
	I1026 09:17:36.860245  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:36.860301  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:36.867408  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:36.867477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:36.897616  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:36.897638  445201 cri.go:89] found id: ""
	I1026 09:17:36.897646  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:36.897702  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:36.901438  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:36.901508  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:36.930919  445201 cri.go:89] found id: ""
	I1026 09:17:36.930949  445201 logs.go:282] 0 containers: []
	W1026 09:17:36.930959  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:36.930968  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:36.931025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:36.968805  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:36.968824  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:36.968828  445201 cri.go:89] found id: ""
	I1026 09:17:36.968835  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:36.968890  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:36.972852  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:36.976357  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:36.976428  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:37.006208  445201 cri.go:89] found id: ""
	I1026 09:17:37.006235  445201 logs.go:282] 0 containers: []
	W1026 09:17:37.006245  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:37.006253  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:37.006323  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:37.049671  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:37.049690  445201 cri.go:89] found id: ""
	I1026 09:17:37.049698  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:37.049755  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:37.056595  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:37.056662  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:37.090287  445201 cri.go:89] found id: ""
	I1026 09:17:37.090395  445201 logs.go:282] 0 containers: []
	W1026 09:17:37.090420  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:37.090452  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:37.090533  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:37.124754  445201 cri.go:89] found id: ""
	I1026 09:17:37.124776  445201 logs.go:282] 0 containers: []
	W1026 09:17:37.124784  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:37.124797  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:37.124813  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:37.158290  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:37.158361  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:37.365418  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:37.365503  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:37.396057  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:37.396144  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:37.514257  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:37.514344  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:37.575988  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:37.576024  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:37.686529  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:37.686566  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:37.737378  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:37.737417  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:37.816257  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:37.816289  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:37.816303  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:37.887031  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:37.887107  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:40.435974  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:40.447109  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:40.447218  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:40.473395  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:40.473416  445201 cri.go:89] found id: ""
	I1026 09:17:40.473424  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:40.473477  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:40.477695  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:40.477771  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:40.507777  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:40.507797  445201 cri.go:89] found id: ""
	I1026 09:17:40.507805  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:40.507870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:40.511796  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:40.511868  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:40.538367  445201 cri.go:89] found id: ""
	I1026 09:17:40.538392  445201 logs.go:282] 0 containers: []
	W1026 09:17:40.538401  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:40.538408  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:40.538469  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:40.566522  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:40.566546  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:40.566551  445201 cri.go:89] found id: ""
	I1026 09:17:40.566559  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:40.566623  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:40.570673  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:40.574570  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:40.574738  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:40.606106  445201 cri.go:89] found id: ""
	I1026 09:17:40.606133  445201 logs.go:282] 0 containers: []
	W1026 09:17:40.606142  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:40.606149  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:40.606255  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:40.634376  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:40.634439  445201 cri.go:89] found id: ""
	I1026 09:17:40.634463  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:40.634535  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:40.638318  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:40.638439  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:40.668515  445201 cri.go:89] found id: ""
	I1026 09:17:40.668553  445201 logs.go:282] 0 containers: []
	W1026 09:17:40.668562  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:40.668588  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:40.668659  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:40.696010  445201 cri.go:89] found id: ""
	I1026 09:17:40.696086  445201 logs.go:282] 0 containers: []
	W1026 09:17:40.696117  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:40.696150  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:40.696175  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:40.796598  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:40.796640  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:40.842610  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:40.842643  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:40.928591  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:40.928631  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:40.969537  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:40.969562  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:41.168415  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:41.168500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:41.190471  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:41.190543  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:41.285524  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:41.285590  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:41.285609  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:41.356876  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:41.357038  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:41.416899  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:41.416928  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
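The "container status" step above wraps crictl in a double fallback: command substitution picks an absolute crictl path when one exists (falling back to the bare name), and if the crictl listing itself fails, the command falls back to docker. Unrolled into the equivalent two lines:

    CRICTL=$(which crictl || echo crictl)   # absolute path if found, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a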
	I1026 09:17:43.946848  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:43.958106  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:43.958175  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:43.988138  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:43.988162  445201 cri.go:89] found id: ""
	I1026 09:17:43.988171  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:43.988228  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:43.992141  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:43.992215  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:44.026389  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:44.026408  445201 cri.go:89] found id: ""
	I1026 09:17:44.026417  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:44.026478  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:44.035139  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:44.035220  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:44.070487  445201 cri.go:89] found id: ""
	I1026 09:17:44.070510  445201 logs.go:282] 0 containers: []
	W1026 09:17:44.070519  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:44.070525  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:44.070583  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:44.114801  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:44.114820  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:44.114825  445201 cri.go:89] found id: ""
	I1026 09:17:44.114832  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:44.114902  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:44.119486  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:44.123803  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:44.123871  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:44.151754  445201 cri.go:89] found id: ""
	I1026 09:17:44.151774  445201 logs.go:282] 0 containers: []
	W1026 09:17:44.151783  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:44.151788  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:44.151844  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:44.185727  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:44.185745  445201 cri.go:89] found id: ""
	I1026 09:17:44.185753  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:44.185815  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:44.189605  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:44.189735  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:44.221032  445201 cri.go:89] found id: ""
	I1026 09:17:44.221053  445201 logs.go:282] 0 containers: []
	W1026 09:17:44.221060  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:44.221066  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:44.221125  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:44.255897  445201 cri.go:89] found id: ""
	I1026 09:17:44.255924  445201 logs.go:282] 0 containers: []
	W1026 09:17:44.255935  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:44.255952  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:44.255965  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:44.296379  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:44.296408  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:44.342895  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:44.342920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:44.541953  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:44.541994  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:44.559641  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:44.559672  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:44.656243  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:44.656271  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:44.656285  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:44.770924  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:44.770978  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:44.888026  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:44.888080  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:44.936399  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:44.936448  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:45.004377  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:45.004475  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:47.541060  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:47.554134  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:47.554202  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:47.590331  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:47.590371  445201 cri.go:89] found id: ""
	I1026 09:17:47.590381  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:47.590431  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:47.594465  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:47.594553  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:47.630170  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:47.630194  445201 cri.go:89] found id: ""
	I1026 09:17:47.630203  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:47.630258  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:47.635009  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:47.635081  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:47.675052  445201 cri.go:89] found id: ""
	I1026 09:17:47.675105  445201 logs.go:282] 0 containers: []
	W1026 09:17:47.675117  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:47.675123  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:47.675180  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:47.708296  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:47.708313  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:47.708318  445201 cri.go:89] found id: ""
	I1026 09:17:47.708326  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:47.708377  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:47.712945  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:47.716743  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:47.716862  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:47.756376  445201 cri.go:89] found id: ""
	I1026 09:17:47.756396  445201 logs.go:282] 0 containers: []
	W1026 09:17:47.756404  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:47.756410  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:47.756469  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:47.789519  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:47.789537  445201 cri.go:89] found id: ""
	I1026 09:17:47.789545  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:47.789599  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:47.794605  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:47.794672  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:47.825006  445201 cri.go:89] found id: ""
	I1026 09:17:47.825027  445201 logs.go:282] 0 containers: []
	W1026 09:17:47.825035  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:47.825041  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:47.825093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:47.857655  445201 cri.go:89] found id: ""
	I1026 09:17:47.857673  445201 logs.go:282] 0 containers: []
	W1026 09:17:47.857681  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:47.857694  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:47.857708  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:47.876507  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:47.876530  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:47.973080  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:47.973096  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:47.973108  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:48.091988  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:48.092234  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:48.154379  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:48.154419  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:48.184424  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:48.184451  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:48.215501  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:48.215526  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:48.415875  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:48.415913  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:48.454132  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:48.454212  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:48.544882  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:48.544918  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
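The cycle above is the complete diagnostic sweep the harness repeats while it waits for the apiserver: discover container IDs for each control-plane component, tail the logs of every container found, then pull the kubelet/CRI-O unit logs and kernel warnings. As a reference, the sweep can be replayed by hand with the same commands that appear verbatim in the log (a sketch assuming shell access to the node, e.g. via `minikube ssh`):

	# 1. Discover container IDs for a component (one ID per line; may be empty).
	#    Then tail the last 400 lines of each container found.
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo /usr/local/bin/crictl logs --tail 400 "$id"
	done
	# 2. Unit logs for the kubelet and the CRI-O runtime.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# 3. Kernel messages at warn level and above.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# 4. Container status, falling back to docker if crictl is missing.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a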
	I1026 09:17:51.083924  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:51.096950  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:51.097036  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:51.142165  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:51.142192  445201 cri.go:89] found id: ""
	I1026 09:17:51.142202  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:51.142259  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:51.147443  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:51.147521  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:51.187948  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:51.187972  445201 cri.go:89] found id: ""
	I1026 09:17:51.187981  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:51.188044  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:51.192794  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:51.192871  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:51.219784  445201 cri.go:89] found id: ""
	I1026 09:17:51.219803  445201 logs.go:282] 0 containers: []
	W1026 09:17:51.219811  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:51.219818  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:51.219886  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:51.251300  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:51.251324  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:51.251329  445201 cri.go:89] found id: ""
	I1026 09:17:51.251337  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:51.251396  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:51.256003  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:51.260199  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:51.260277  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:51.293495  445201 cri.go:89] found id: ""
	I1026 09:17:51.293522  445201 logs.go:282] 0 containers: []
	W1026 09:17:51.293531  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:51.293537  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:51.293592  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:51.331888  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:51.331911  445201 cri.go:89] found id: ""
	I1026 09:17:51.331919  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:51.331974  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:51.336671  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:51.336749  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:51.368354  445201 cri.go:89] found id: ""
	I1026 09:17:51.368379  445201 logs.go:282] 0 containers: []
	W1026 09:17:51.368388  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:51.368393  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:51.368448  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:51.412142  445201 cri.go:89] found id: ""
	I1026 09:17:51.412164  445201 logs.go:282] 0 containers: []
	W1026 09:17:51.412171  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:51.412183  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:51.412195  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:51.453568  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:51.453596  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:51.489651  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:51.489678  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:51.586879  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:51.586915  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:51.783252  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:51.783291  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:51.876057  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:51.876098  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:51.945927  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:51.945961  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:51.978994  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:51.979021  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:51.996506  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:51.996533  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:52.083683  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:52.083705  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:52.083720  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
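Every "describe nodes" attempt in this window fails the same way: kubectl on the node dials localhost:8443 and the connection is refused, meaning nothing is accepting on the apiserver port even though an apiserver container exists. A quick spot-check with standard tooling (not part of the test; run on the node):

	# Is anything listening on the port kubectl dials?
	sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
	# Probe the health endpoint directly; -k skips TLS verification, -f fails on HTTP errors.
	curl -ksf https://localhost:8443/healthz && echo "apiserver healthy" || echo "apiserver not ready"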
	I1026 09:17:54.626820  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:54.639249  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:54.639320  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:54.677290  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:54.677310  445201 cri.go:89] found id: ""
	I1026 09:17:54.677318  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:54.677377  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:54.681600  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:54.681671  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:54.718271  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:54.718289  445201 cri.go:89] found id: ""
	I1026 09:17:54.718296  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:54.718351  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:54.729554  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:54.729633  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:54.761590  445201 cri.go:89] found id: ""
	I1026 09:17:54.761610  445201 logs.go:282] 0 containers: []
	W1026 09:17:54.761619  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:54.761689  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:54.761757  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:54.810317  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:54.810338  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:54.810343  445201 cri.go:89] found id: ""
	I1026 09:17:54.810350  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:54.810403  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:54.814520  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:54.819281  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:54.819347  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:54.861086  445201 cri.go:89] found id: ""
	I1026 09:17:54.861108  445201 logs.go:282] 0 containers: []
	W1026 09:17:54.861115  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:54.861121  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:54.861179  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:54.905660  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:54.905683  445201 cri.go:89] found id: ""
	I1026 09:17:54.905691  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:54.905743  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:54.912342  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:54.912418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:54.969914  445201 cri.go:89] found id: ""
	I1026 09:17:54.969939  445201 logs.go:282] 0 containers: []
	W1026 09:17:54.969947  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:54.969953  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:54.970011  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:55.002168  445201 cri.go:89] found id: ""
	I1026 09:17:55.002191  445201 logs.go:282] 0 containers: []
	W1026 09:17:55.002199  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:55.002218  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:55.002232  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:55.113954  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:55.114037  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:55.331314  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:55.331391  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:55.390543  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:55.390617  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:55.427217  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:55.427293  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:17:55.471142  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:55.471224  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:55.489222  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:55.489300  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:55.571472  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:55.571542  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:55.571569  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:55.697626  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:55.697710  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:55.759064  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:55.759143  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
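Between sweeps the harness re-checks for a running apiserver process with `pgrep -xnf kube-apiserver.*minikube.*` (-f matches against the full command line, -x requires an exact match, -n picks the newest process); the timestamps show a roughly three-to-four-second cadence. A minimal sketch of the implied retry, assuming the cadence and exit condition (the real loop is Go code inside minikube, not shell):

	# Keep probing until the apiserver answers the same request that keeps failing above.
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 3   # assumed interval, matching the log cadence
	done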
	I1026 09:17:58.310591  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:17:58.321692  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:17:58.321759  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:17:58.352338  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:58.352362  445201 cri.go:89] found id: ""
	I1026 09:17:58.352372  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:17:58.352448  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:58.356485  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:17:58.356560  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:17:58.396130  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:58.396152  445201 cri.go:89] found id: ""
	I1026 09:17:58.396161  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:17:58.396217  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:58.400112  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:17:58.400184  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:17:58.428571  445201 cri.go:89] found id: ""
	I1026 09:17:58.428595  445201 logs.go:282] 0 containers: []
	W1026 09:17:58.428603  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:17:58.428609  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:17:58.428667  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:17:58.460549  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:58.460575  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:58.460580  445201 cri.go:89] found id: ""
	I1026 09:17:58.460587  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:17:58.460665  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:58.464594  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:58.468207  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:17:58.468322  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:17:58.495597  445201 cri.go:89] found id: ""
	I1026 09:17:58.495670  445201 logs.go:282] 0 containers: []
	W1026 09:17:58.495693  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:17:58.495712  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:17:58.495797  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:17:58.530100  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:58.530168  445201 cri.go:89] found id: ""
	I1026 09:17:58.530189  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:17:58.530271  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:17:58.534470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:17:58.534549  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:17:58.563566  445201 cri.go:89] found id: ""
	I1026 09:17:58.563589  445201 logs.go:282] 0 containers: []
	W1026 09:17:58.563597  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:17:58.563604  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:17:58.563667  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:17:58.591497  445201 cri.go:89] found id: ""
	I1026 09:17:58.591521  445201 logs.go:282] 0 containers: []
	W1026 09:17:58.591530  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:17:58.591546  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:17:58.591557  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:17:58.772002  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:17:58.772038  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:17:58.788721  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:17:58.788750  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:17:58.865951  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:17:58.866016  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:17:58.866043  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:17:58.926005  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:17:58.926043  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:17:58.952675  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:17:58.952748  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:17:59.034821  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:17:59.034858  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:17:59.123656  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:17:59.123695  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:17:59.160488  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:17:59.160525  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:17:59.192749  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:17:59.192781  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
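Each sweep checks the same eight component names; in this run only kube-apiserver, etcd, one kube-controller-manager and two kube-scheduler containers exist, while coredns, kube-proxy, kindnet and storage-provisioner consistently come back empty (hence the repeated "No container was found matching" warnings). The discovery step reduces to a loop like this sketch, with the component list copied from the log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  if [ -n "$ids" ]; then printf '%s: %s\n' "$c" "$ids"
	  else printf 'no container matching "%s"\n' "$c"; fi
	done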
	I1026 09:18:01.726699  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:01.741023  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:01.741090  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:01.791931  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:01.791953  445201 cri.go:89] found id: ""
	I1026 09:18:01.791962  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:01.792018  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:01.799396  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:01.799468  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:01.838045  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:01.838065  445201 cri.go:89] found id: ""
	I1026 09:18:01.838074  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:01.838136  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:01.844136  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:01.844262  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:01.889433  445201 cri.go:89] found id: ""
	I1026 09:18:01.889455  445201 logs.go:282] 0 containers: []
	W1026 09:18:01.889464  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:01.889470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:01.889529  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:01.936530  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:01.936549  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:01.936554  445201 cri.go:89] found id: ""
	I1026 09:18:01.936561  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:01.936614  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:01.941684  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:01.945808  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:01.945875  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:01.975226  445201 cri.go:89] found id: ""
	I1026 09:18:01.975306  445201 logs.go:282] 0 containers: []
	W1026 09:18:01.975329  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:01.975352  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:01.975458  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:02.015223  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:02.015298  445201 cri.go:89] found id: ""
	I1026 09:18:02.015320  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:02.015461  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:02.021807  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:02.021934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:02.065882  445201 cri.go:89] found id: ""
	I1026 09:18:02.065986  445201 logs.go:282] 0 containers: []
	W1026 09:18:02.066009  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:02.066031  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:02.066134  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:02.095640  445201 cri.go:89] found id: ""
	I1026 09:18:02.095716  445201 logs.go:282] 0 containers: []
	W1026 09:18:02.095747  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:02.095777  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:02.095803  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:02.192857  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:02.192879  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:02.192892  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:02.252133  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:02.252168  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:02.287383  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:02.287409  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:02.319631  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:02.319658  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:02.412377  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:02.412455  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:02.444911  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:02.444936  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:02.632222  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:02.632258  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:02.648595  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:02.648686  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:02.738608  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:02.738647  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:05.306836  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:05.319227  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:05.319299  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:05.350470  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:05.350493  445201 cri.go:89] found id: ""
	I1026 09:18:05.350502  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:05.350558  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:05.354352  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:05.354430  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:05.389894  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:05.389913  445201 cri.go:89] found id: ""
	I1026 09:18:05.389922  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:05.389976  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:05.393819  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:05.393902  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:05.420702  445201 cri.go:89] found id: ""
	I1026 09:18:05.420728  445201 logs.go:282] 0 containers: []
	W1026 09:18:05.420737  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:05.420744  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:05.420807  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:05.455023  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:05.455055  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:05.455063  445201 cri.go:89] found id: ""
	I1026 09:18:05.455070  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:05.455128  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:05.463476  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:05.467553  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:05.467629  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:05.513320  445201 cri.go:89] found id: ""
	I1026 09:18:05.513347  445201 logs.go:282] 0 containers: []
	W1026 09:18:05.513357  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:05.513363  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:05.513423  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:05.562436  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:05.562460  445201 cri.go:89] found id: ""
	I1026 09:18:05.562468  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:05.562522  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:05.568334  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:05.568412  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:05.595638  445201 cri.go:89] found id: ""
	I1026 09:18:05.595663  445201 logs.go:282] 0 containers: []
	W1026 09:18:05.595672  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:05.595679  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:05.595740  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:05.624625  445201 cri.go:89] found id: ""
	I1026 09:18:05.624652  445201 logs.go:282] 0 containers: []
	W1026 09:18:05.624661  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:05.624675  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:05.624717  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:05.817785  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:05.817826  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:05.840527  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:05.840552  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:05.938806  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:05.938836  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:05.938852  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:06.031708  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:06.031751  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:06.063907  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:06.063938  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:06.171361  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:06.171402  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:06.206609  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:06.206682  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:06.326579  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:06.326618  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:06.369633  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:06.369662  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:08.916253  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:08.927564  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:08.927633  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:08.963576  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:08.963599  445201 cri.go:89] found id: ""
	I1026 09:18:08.963608  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:08.963662  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:08.967913  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:08.967986  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:08.995880  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:08.995904  445201 cri.go:89] found id: ""
	I1026 09:18:08.995913  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:08.995968  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:09.006560  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:09.006639  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:09.050095  445201 cri.go:89] found id: ""
	I1026 09:18:09.050121  445201 logs.go:282] 0 containers: []
	W1026 09:18:09.050130  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:09.050137  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:09.050199  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:09.093825  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:09.093849  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:09.093855  445201 cri.go:89] found id: ""
	I1026 09:18:09.093862  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:09.093920  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:09.099057  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:09.103668  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:09.103750  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:09.147581  445201 cri.go:89] found id: ""
	I1026 09:18:09.147606  445201 logs.go:282] 0 containers: []
	W1026 09:18:09.147614  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:09.147620  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:09.147678  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:09.179916  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:09.179944  445201 cri.go:89] found id: ""
	I1026 09:18:09.179954  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:09.180009  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:09.189349  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:09.189422  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:09.225146  445201 cri.go:89] found id: ""
	I1026 09:18:09.225174  445201 logs.go:282] 0 containers: []
	W1026 09:18:09.225183  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:09.225190  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:09.225251  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:09.262430  445201 cri.go:89] found id: ""
	I1026 09:18:09.262456  445201 logs.go:282] 0 containers: []
	W1026 09:18:09.262464  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:09.262479  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:09.262491  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:09.468576  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:09.468621  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:09.486577  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:09.486607  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:09.570372  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:09.570395  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:09.570407  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:09.664882  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:09.664922  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:09.696922  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:09.696954  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:09.746659  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:09.746695  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:09.861505  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:09.861584  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:09.903262  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:09.903292  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:09.994948  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:09.994986  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
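Two kube-scheduler IDs show up in every sweep because `crictl ps -a` includes exited containers; with a restarting control plane, one of the two is typically a dead earlier instance rather than a second live scheduler. A hedged way to tell them apart on the node:

	sudo crictl ps --name=kube-scheduler      # running containers only
	sudo crictl ps -a --name=kube-scheduler   # all containers; the STATE column shows Exited vs Running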
	I1026 09:18:12.559579  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:12.575798  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:12.575882  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:12.619478  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:12.619509  445201 cri.go:89] found id: ""
	I1026 09:18:12.619518  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:12.619586  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:12.625332  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:12.625421  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:12.675151  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:12.675175  445201 cri.go:89] found id: ""
	I1026 09:18:12.675190  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:12.675245  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:12.679325  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:12.679396  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:12.721306  445201 cri.go:89] found id: ""
	I1026 09:18:12.721335  445201 logs.go:282] 0 containers: []
	W1026 09:18:12.721343  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:12.721350  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:12.721431  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:12.769536  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:12.769561  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:12.769567  445201 cri.go:89] found id: ""
	I1026 09:18:12.769575  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:12.769680  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:12.775808  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:12.779629  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:12.779732  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:12.817150  445201 cri.go:89] found id: ""
	I1026 09:18:12.817176  445201 logs.go:282] 0 containers: []
	W1026 09:18:12.817186  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:12.817192  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:12.817272  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:12.854305  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:12.854329  445201 cri.go:89] found id: ""
	I1026 09:18:12.854338  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:12.854399  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:12.861472  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:12.861560  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:12.901948  445201 cri.go:89] found id: ""
	I1026 09:18:12.901975  445201 logs.go:282] 0 containers: []
	W1026 09:18:12.901983  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:12.901990  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:12.902102  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:12.933212  445201 cri.go:89] found id: ""
	I1026 09:18:12.933235  445201 logs.go:282] 0 containers: []
	W1026 09:18:12.933244  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:12.933284  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:12.933303  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:12.984193  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:12.984232  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:13.053951  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:13.053991  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:13.090952  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:13.090984  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:13.128562  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:13.128590  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:13.314515  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:13.314558  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:13.423834  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:13.423856  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:13.423869  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:13.457411  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:13.457444  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:13.554280  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:13.554321  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:13.572346  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:13.572496  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:16.197146  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:16.209584  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:16.209658  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:16.245036  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:16.245056  445201 cri.go:89] found id: ""
	I1026 09:18:16.245064  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:16.245121  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:16.250101  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:16.250171  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:16.307352  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:16.307372  445201 cri.go:89] found id: ""
	I1026 09:18:16.307380  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:16.307440  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:16.311438  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:16.311504  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:16.353929  445201 cri.go:89] found id: ""
	I1026 09:18:16.353952  445201 logs.go:282] 0 containers: []
	W1026 09:18:16.353960  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:16.353967  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:16.354028  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:16.403376  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:16.403409  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:16.403414  445201 cri.go:89] found id: ""
	I1026 09:18:16.403421  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:16.403529  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:16.407588  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:16.411396  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:16.411463  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:16.449327  445201 cri.go:89] found id: ""
	I1026 09:18:16.449349  445201 logs.go:282] 0 containers: []
	W1026 09:18:16.449357  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:16.449363  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:16.449420  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:16.482505  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:16.482530  445201 cri.go:89] found id: ""
	I1026 09:18:16.482539  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:16.482594  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:16.486472  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:16.486615  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:16.525044  445201 cri.go:89] found id: ""
	I1026 09:18:16.525079  445201 logs.go:282] 0 containers: []
	W1026 09:18:16.525089  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:16.525096  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:16.525164  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:16.555406  445201 cri.go:89] found id: ""
	I1026 09:18:16.555471  445201 logs.go:282] 0 containers: []
	W1026 09:18:16.555492  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:16.555522  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:16.555558  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:16.593507  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:16.593531  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:16.682431  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:16.682450  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:16.682462  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:16.731438  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:16.731511  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:16.830969  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:16.831054  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:16.870203  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:16.870229  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:17.054320  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:17.054362  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:17.071708  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:17.071737  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:17.170122  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:17.170354  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:17.257381  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:17.257459  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:19.794265  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:19.810033  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:19.810146  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:19.863856  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:19.863920  445201 cri.go:89] found id: ""
	I1026 09:18:19.863946  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:19.864035  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:19.872584  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:19.872708  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:19.921531  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:19.921595  445201 cri.go:89] found id: ""
	I1026 09:18:19.921617  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:19.921707  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:19.929893  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:19.930015  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:19.986560  445201 cri.go:89] found id: ""
	I1026 09:18:19.986627  445201 logs.go:282] 0 containers: []
	W1026 09:18:19.986649  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:19.986669  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:19.986765  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:20.035075  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:20.035154  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:20.035173  445201 cri.go:89] found id: ""
	I1026 09:18:20.035196  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:20.035287  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:20.047581  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:20.052206  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:20.052335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:20.104837  445201 cri.go:89] found id: ""
	I1026 09:18:20.104908  445201 logs.go:282] 0 containers: []
	W1026 09:18:20.104932  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:20.104956  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:20.105043  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:20.142642  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:20.142737  445201 cri.go:89] found id: ""
	I1026 09:18:20.142764  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:20.142861  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:20.151498  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:20.151617  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:20.198982  445201 cri.go:89] found id: ""
	I1026 09:18:20.199060  445201 logs.go:282] 0 containers: []
	W1026 09:18:20.199084  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:20.199107  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:20.199199  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:20.245045  445201 cri.go:89] found id: ""
	I1026 09:18:20.245114  445201 logs.go:282] 0 containers: []
	W1026 09:18:20.245137  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:20.245164  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:20.245206  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:20.292344  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:20.292423  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:20.386980  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:20.387043  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:20.387070  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:20.506658  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:20.506757  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:20.618549  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:20.618590  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:20.850203  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:20.850235  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:20.898064  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:20.898146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:20.961043  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:20.961196  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:21.018591  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:21.018618  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:21.069809  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:21.069888  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:23.683761  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:23.705042  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:23.705111  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:23.758233  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:23.758253  445201 cri.go:89] found id: ""
	I1026 09:18:23.758262  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:23.758322  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:23.763700  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:23.763783  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:23.802485  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:23.802566  445201 cri.go:89] found id: ""
	I1026 09:18:23.802589  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:23.802687  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:23.807357  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:23.807485  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:23.844892  445201 cri.go:89] found id: ""
	I1026 09:18:23.844914  445201 logs.go:282] 0 containers: []
	W1026 09:18:23.844923  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:23.844928  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:23.844994  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:23.894253  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:23.894323  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:23.894331  445201 cri.go:89] found id: ""
	I1026 09:18:23.894339  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:23.894460  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:23.901410  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:23.905549  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:23.905677  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:23.935596  445201 cri.go:89] found id: ""
	I1026 09:18:23.935671  445201 logs.go:282] 0 containers: []
	W1026 09:18:23.935706  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:23.935741  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:23.935828  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:23.968385  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:23.968457  445201 cri.go:89] found id: ""
	I1026 09:18:23.968482  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:23.968581  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:23.974071  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:23.974197  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:24.010933  445201 cri.go:89] found id: ""
	I1026 09:18:24.011013  445201 logs.go:282] 0 containers: []
	W1026 09:18:24.011037  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:24.011061  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:24.011176  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:24.058904  445201 cri.go:89] found id: ""
	I1026 09:18:24.058978  445201 logs.go:282] 0 containers: []
	W1026 09:18:24.059001  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:24.059031  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:24.059074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:24.080999  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:24.081077  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:24.139598  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:24.139669  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:24.192994  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:24.193025  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:24.244812  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:24.244841  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:24.360650  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:24.360741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:24.414987  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:24.415118  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:24.644507  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:24.644541  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:24.732677  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:24.732702  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:24.732716  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:24.843139  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:24.843178  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:27.420858  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:27.435020  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:27.435093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:27.462527  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:27.462551  445201 cri.go:89] found id: ""
	I1026 09:18:27.462559  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:27.462614  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.466287  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:27.466360  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:27.493839  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:27.493861  445201 cri.go:89] found id: ""
	I1026 09:18:27.493870  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:27.493927  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.497623  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:27.497752  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:27.526904  445201 cri.go:89] found id: ""
	I1026 09:18:27.526927  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.526936  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:27.526942  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:27.527003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:27.557509  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:27.557530  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:27.557535  445201 cri.go:89] found id: ""
	I1026 09:18:27.557542  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:27.557608  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.561397  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.565255  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:27.565338  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:27.594176  445201 cri.go:89] found id: ""
	I1026 09:18:27.594202  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.594211  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:27.594217  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:27.594279  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:27.623222  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:27.623248  445201 cri.go:89] found id: ""
	I1026 09:18:27.623258  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:27.623316  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.627247  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:27.627330  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:27.653051  445201 cri.go:89] found id: ""
	I1026 09:18:27.653078  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.653086  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:27.653092  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:27.653150  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:27.679559  445201 cri.go:89] found id: ""
	I1026 09:18:27.679586  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.679597  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:27.679610  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:27.679622  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:27.767257  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:27.767293  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:27.801763  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:27.801796  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:27.828368  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:27.828394  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:27.862093  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:27.862127  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:28.046689  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:28.046734  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:28.144179  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:28.144202  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:28.144215  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:28.210208  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:28.210291  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:28.241056  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:28.241082  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:28.333484  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:28.333563  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:30.853449  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:30.864358  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:30.864425  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:30.912189  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:30.912213  445201 cri.go:89] found id: ""
	I1026 09:18:30.912221  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:30.912274  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:30.916421  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:30.916490  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:30.949024  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:30.949044  445201 cri.go:89] found id: ""
	I1026 09:18:30.949053  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:30.949106  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:30.953463  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:30.953531  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:30.980685  445201 cri.go:89] found id: ""
	I1026 09:18:30.980709  445201 logs.go:282] 0 containers: []
	W1026 09:18:30.980718  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:30.980724  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:30.980780  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:31.016133  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:31.016158  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:31.016173  445201 cri.go:89] found id: ""
	I1026 09:18:31.016181  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:31.016238  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:31.020939  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:31.025309  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:31.025386  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:31.060958  445201 cri.go:89] found id: ""
	I1026 09:18:31.060984  445201 logs.go:282] 0 containers: []
	W1026 09:18:31.060993  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:31.060998  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:31.061055  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:31.101477  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:31.101502  445201 cri.go:89] found id: ""
	I1026 09:18:31.101510  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:31.101569  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:31.106479  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:31.106551  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:31.149271  445201 cri.go:89] found id: ""
	I1026 09:18:31.149299  445201 logs.go:282] 0 containers: []
	W1026 09:18:31.149308  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:31.149314  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:31.149377  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:31.197370  445201 cri.go:89] found id: ""
	I1026 09:18:31.197396  445201 logs.go:282] 0 containers: []
	W1026 09:18:31.197404  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:31.197417  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:31.197429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:31.267814  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:31.267847  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:31.306800  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:31.306828  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:31.355411  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:31.355446  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:31.594236  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:31.594284  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:31.707540  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:31.707562  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:31.707575  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:31.749062  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:31.749139  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:31.782579  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:31.782608  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:31.875835  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:31.875914  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:31.893388  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:31.893468  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:34.494834  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:34.505863  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:34.505934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:34.533409  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:34.533434  445201 cri.go:89] found id: ""
	I1026 09:18:34.533444  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:34.533506  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.537148  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:34.537273  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:34.563259  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:34.563329  445201 cri.go:89] found id: ""
	I1026 09:18:34.563362  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:34.563444  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.567511  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:34.567635  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:34.598008  445201 cri.go:89] found id: ""
	I1026 09:18:34.598033  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.598043  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:34.598049  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:34.598106  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:34.625931  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:34.625955  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:34.625960  445201 cri.go:89] found id: ""
	I1026 09:18:34.625967  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:34.626023  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.629701  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.633096  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:34.633218  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:34.659117  445201 cri.go:89] found id: ""
	I1026 09:18:34.659144  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.659153  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:34.659160  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:34.659249  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:34.688692  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:34.688717  445201 cri.go:89] found id: ""
	I1026 09:18:34.688725  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:34.688779  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.692931  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:34.693020  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:34.721387  445201 cri.go:89] found id: ""
	I1026 09:18:34.721410  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.721418  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:34.721430  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:34.721486  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:34.767730  445201 cri.go:89] found id: ""
	I1026 09:18:34.767751  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.767760  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:34.767774  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:34.767787  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:34.806565  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:34.806596  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:35.067378  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:35.067476  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:35.094465  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:35.094551  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:35.215638  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:35.215715  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:35.316393  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:35.316482  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:35.371828  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:35.371854  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:35.482470  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:35.482497  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:35.668838  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:35.668926  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:35.890598  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:35.890623  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:35.890636  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:38.480272  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:38.490814  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:38.490909  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:38.519259  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:38.519283  445201 cri.go:89] found id: ""
	I1026 09:18:38.519292  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:38.519346  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.530260  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:38.530335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:38.557988  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:38.558011  445201 cri.go:89] found id: ""
	I1026 09:18:38.558019  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:38.558072  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.561801  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:38.561902  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:38.588725  445201 cri.go:89] found id: ""
	I1026 09:18:38.588754  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.588763  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:38.588769  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:38.588828  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:38.615132  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:38.615155  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:38.615161  445201 cri.go:89] found id: ""
	I1026 09:18:38.615169  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:38.615233  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.618898  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.622588  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:38.622678  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:38.647919  445201 cri.go:89] found id: ""
	I1026 09:18:38.647945  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.647954  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:38.647960  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:38.648043  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:38.683904  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:38.683927  445201 cri.go:89] found id: ""
	I1026 09:18:38.683935  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:38.683991  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.687728  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:38.687814  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:38.713813  445201 cri.go:89] found id: ""
	I1026 09:18:38.713891  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.713914  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:38.713936  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:38.714028  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:38.744811  445201 cri.go:89] found id: ""
	I1026 09:18:38.744835  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.744844  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:38.744859  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:38.744890  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:38.831387  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:38.831429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:38.858909  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:38.858939  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:38.889239  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:38.889270  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:39.072659  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:39.072738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:39.090892  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:39.091036  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:39.168978  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:39.169000  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:39.169013  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:39.213394  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:39.213429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:39.303324  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:39.303405  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:39.334190  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:39.334221  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:41.936736  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:41.952312  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:41.952378  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:42.003484  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:42.003509  445201 cri.go:89] found id: ""
	I1026 09:18:42.003519  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:42.003593  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.009347  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:42.009506  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:42.041427  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:42.041447  445201 cri.go:89] found id: ""
	I1026 09:18:42.041455  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:42.041511  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.050442  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:42.050511  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:42.085278  445201 cri.go:89] found id: ""
	I1026 09:18:42.085302  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.085313  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:42.085321  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:42.085390  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:42.135000  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:42.135039  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:42.135052  445201 cri.go:89] found id: ""
	I1026 09:18:42.135061  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:42.135142  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.143324  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.148842  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:42.148968  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:42.187227  445201 cri.go:89] found id: ""
	I1026 09:18:42.187260  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.187270  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:42.187277  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:42.187349  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:42.230590  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:42.230615  445201 cri.go:89] found id: ""
	I1026 09:18:42.230634  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:42.230696  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.237513  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:42.237587  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:42.273749  445201 cri.go:89] found id: ""
	I1026 09:18:42.273774  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.273783  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:42.273789  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:42.273853  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:42.317499  445201 cri.go:89] found id: ""
	I1026 09:18:42.317527  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.317537  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:42.317550  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:42.317564  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:42.428824  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:42.428862  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:42.462647  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:42.462675  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:42.496373  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:42.496403  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:42.514247  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:42.514278  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:42.601636  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:42.601658  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:42.601675  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:42.652567  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:42.652602  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:42.728043  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:42.728151  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:42.835698  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:42.835733  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:42.890013  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:42.890087  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
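The cycle above repeats one discovery step per control-plane component: run `sudo crictl ps -a --quiet --name=<component>` and treat every non-empty line of stdout as a container ID (that is what the `found id:` and `N containers:` lines record). Below is a minimal, self-contained Go sketch of that lookup, assuming crictl is installed; the function and variable names are illustrative, not minikube's actual internals.

// listcontainers.go — sketch of the per-component container lookup seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose
// name matches the given component, e.g. "kube-apiserver" or "etcd".
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // mirrors the "N containers:" lines
	}
}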
	I1026 09:18:45.632211  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:45.645366  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:45.645433  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:45.681113  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:45.681140  445201 cri.go:89] found id: ""
	I1026 09:18:45.681148  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:45.681201  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.686769  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:45.686843  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:45.717975  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:45.717999  445201 cri.go:89] found id: ""
	I1026 09:18:45.718008  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:45.718067  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.721640  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:45.721709  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:45.756101  445201 cri.go:89] found id: ""
	I1026 09:18:45.756130  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.756140  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:45.756147  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:45.756205  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:45.787128  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:45.787153  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:45.787159  445201 cri.go:89] found id: ""
	I1026 09:18:45.787166  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:45.787221  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.791689  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.795274  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:45.795360  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:45.829769  445201 cri.go:89] found id: ""
	I1026 09:18:45.829798  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.829806  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:45.829812  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:45.829873  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:45.862647  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:45.862673  445201 cri.go:89] found id: ""
	I1026 09:18:45.862682  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:45.862763  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.867860  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:45.867935  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:45.903841  445201 cri.go:89] found id: ""
	I1026 09:18:45.903870  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.903879  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:45.903892  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:45.903952  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:45.931348  445201 cri.go:89] found id: ""
	I1026 09:18:45.931377  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.931386  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:45.931401  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:45.931412  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:45.976805  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:45.976843  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:46.051944  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:46.051978  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:46.086154  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:46.086182  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:46.180461  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:46.180500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:46.242681  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:46.242729  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:46.335607  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:46.335630  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:46.335644  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:46.372229  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:46.372259  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:46.576414  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:46.576452  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:46.593353  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:46.593384  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
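Every `describe nodes` attempt in this section fails identically: kubectl reports `The connection to the server localhost:8443 was refused`, meaning nothing is accepting TCP connections on the apiserver port at that moment (the kubeconfig itself is found and parsed). A plain TCP dial is enough to confirm that reading and to tell "port closed" apart from a TLS or auth problem higher up; a rough sketch, purely illustrative:

// probe.go — confirm what the repeated "connection refused" lines imply.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the kubectl symptom above: connect() fails outright.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443; the failure is higher up the stack")
}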
	I1026 09:18:49.196673  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:49.208768  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:49.208834  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:49.237992  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:49.238012  445201 cri.go:89] found id: ""
	I1026 09:18:49.238026  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:49.238078  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.242232  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:49.242301  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:49.274115  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:49.274134  445201 cri.go:89] found id: ""
	I1026 09:18:49.274141  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:49.274196  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.278751  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:49.278824  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:49.316174  445201 cri.go:89] found id: ""
	I1026 09:18:49.316247  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.316268  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:49.316286  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:49.316376  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:49.364077  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:49.364097  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:49.364102  445201 cri.go:89] found id: ""
	I1026 09:18:49.364110  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:49.364167  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.368007  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.371900  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:49.371969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:49.480960  445201 cri.go:89] found id: ""
	I1026 09:18:49.480983  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.480991  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:49.480997  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:49.481051  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:49.573122  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:49.573141  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:49.573146  445201 cri.go:89] found id: ""
	I1026 09:18:49.573153  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:49.573207  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.582516  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.586286  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:49.586360  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:49.653326  445201 cri.go:89] found id: ""
	I1026 09:18:49.653348  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.653356  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:49.653362  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:49.653419  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:49.753083  445201 cri.go:89] found id: ""
	I1026 09:18:49.753104  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.753112  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:49.753121  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:49.753133  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:49.868436  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:49.868509  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:50.107262  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:50.107333  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:50.107360  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:50.218345  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:50.218421  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:50.300338  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:18:50.300415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:50.451266  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:50.451352  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:50.702844  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:50.702924  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:50.761113  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:50.761141  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:50.992387  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:50.992470  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:51.164309  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:51.164354  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:51.244398  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:51.244428  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
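Each block opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, and the whole gather cycle repeats every three to four seconds — in effect a wait-until-the-apiserver-returns loop. A minimal sketch of such a poll follows; the interval and deadline are assumptions for illustration, since minikube's actual values are not visible in this log.

// waitapiserver.go — sketch of the retry loop driving the repeated cycles above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a matching process:
// pgrep exits 0 on a match and non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not up yet; retrying")
		time.Sleep(3 * time.Second) // the log shows roughly 3-4s between polls
	}
	fmt.Println("gave up waiting for kube-apiserver")
}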
	I1026 09:18:53.918830  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:53.931665  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:53.931737  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:53.965110  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:53.965128  445201 cri.go:89] found id: ""
	I1026 09:18:53.965136  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:53.965189  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:53.969286  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:53.969364  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:53.998070  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:53.998148  445201 cri.go:89] found id: ""
	I1026 09:18:53.998172  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:53.998300  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.009478  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:54.009549  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:54.057681  445201 cri.go:89] found id: ""
	I1026 09:18:54.057704  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.057713  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:54.057719  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:54.057779  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:54.096667  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:54.096686  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:54.096690  445201 cri.go:89] found id: ""
	I1026 09:18:54.096697  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:54.096754  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.100714  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.106334  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:54.106412  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:54.142903  445201 cri.go:89] found id: ""
	I1026 09:18:54.142932  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.142942  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:54.142949  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:54.143009  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:54.178911  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:54.178941  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:54.178947  445201 cri.go:89] found id: ""
	I1026 09:18:54.178957  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:54.179025  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.183501  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.187589  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:54.187669  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:54.221295  445201 cri.go:89] found id: ""
	I1026 09:18:54.221331  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.221339  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:54.221347  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:54.221418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:54.257120  445201 cri.go:89] found id: ""
	I1026 09:18:54.257156  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.257165  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:54.257175  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:54.257187  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:54.274116  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:54.274152  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:54.367913  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:54.367953  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:54.424863  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:54.424897  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:54.522786  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:54.522824  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:54.558996  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:54.559027  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:54.761276  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:54.761314  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:54.842974  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:54.842998  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:54.843013  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:54.883224  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:54.883258  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:54.959058  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:54.959095  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:54.990575  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:18:54.990606  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
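The `Gathering logs for ...` pairs map each source to a tail-limited shell command: `journalctl -u <unit> -n 400` for systemd units, `crictl logs --tail 400 <id>` for containers, and a filtered `dmesg | tail -n 400` for the kernel ring buffer. A self-contained sketch of that source-to-command table; the commands are copied from the log, while the wrapper code around them is assumed.

// gather.go — sketch of the log-source table walked through above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet": `sudo journalctl -u kubelet -n 400`,
		"CRI-O":   `sudo journalctl -u crio -n 400`,
		"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		// container logs would use: sudo crictl logs --tail 400 <container-id>
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n%s", name, len(out), out)
	}
}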
	I1026 09:18:57.523986  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:57.540088  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:57.540158  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:57.588980  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:57.589003  445201 cri.go:89] found id: ""
	I1026 09:18:57.589011  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:57.589068  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.592926  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:57.593004  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:57.636503  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:57.636526  445201 cri.go:89] found id: ""
	I1026 09:18:57.636535  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:57.636593  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.641341  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:57.641415  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:57.685033  445201 cri.go:89] found id: ""
	I1026 09:18:57.685059  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.685068  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:57.685075  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:57.685131  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:57.711978  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:57.712001  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:57.712007  445201 cri.go:89] found id: ""
	I1026 09:18:57.712014  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:57.712080  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.715873  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.719254  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:57.719321  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:57.753685  445201 cri.go:89] found id: ""
	I1026 09:18:57.753710  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.753718  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:57.753725  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:57.753778  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:57.784895  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:57.784916  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:57.784921  445201 cri.go:89] found id: ""
	I1026 09:18:57.784929  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:57.784983  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.788971  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.792746  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:57.792820  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:57.840896  445201 cri.go:89] found id: ""
	I1026 09:18:57.840923  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.840933  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:57.840939  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:57.841003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:57.880935  445201 cri.go:89] found id: ""
	I1026 09:18:57.880960  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.880969  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:57.880978  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:57.880990  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:58.119061  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:58.119100  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:58.144002  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:58.144032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:58.272226  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:58.272262  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:58.338046  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:58.338081  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:58.468726  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:18:58.468824  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:58.537377  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:58.537418  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:58.581358  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:58.581384  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:58.626891  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:58.626920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:58.765686  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:58.765706  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:58.765720  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:58.802844  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:58.802873  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
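The `container status` step uses a shell fallback chain: resolve the binary with `which crictl || echo crictl`, run `ps -a`, and fall back to `sudo docker ps -a` if that fails. The same preference order expressed in Go, as a rough sketch:

// status.go — sketch of the crictl-then-docker fallback in the "container status" step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// Mirrors the `|| sudo docker ps -a` half of the shell chain.
		fmt.Println("crictl unavailable, falling back to docker:", err)
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("docker fallback failed too:", err)
			return
		}
	}
	fmt.Printf("%s", out)
}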
	I1026 09:19:01.423814  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:01.435792  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:01.435861  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:01.469223  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:01.469246  445201 cri.go:89] found id: ""
	I1026 09:19:01.469255  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:01.469314  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.473717  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:01.473784  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:01.507729  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:01.507750  445201 cri.go:89] found id: ""
	I1026 09:19:01.507759  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:01.507814  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.512111  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:01.512179  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:01.540080  445201 cri.go:89] found id: ""
	I1026 09:19:01.540105  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.540119  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:01.540126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:01.540182  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:01.568546  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:01.568565  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:01.568570  445201 cri.go:89] found id: ""
	I1026 09:19:01.568577  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:01.568628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.572626  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.576629  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:01.576695  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:01.605879  445201 cri.go:89] found id: ""
	I1026 09:19:01.605903  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.605911  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:01.605918  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:01.605973  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:01.670518  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:01.670540  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:19:01.670546  445201 cri.go:89] found id: ""
	I1026 09:19:01.670565  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:19:01.670628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.683230  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.687508  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:01.687586  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:01.744856  445201 cri.go:89] found id: ""
	I1026 09:19:01.744881  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.744891  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:01.744903  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:01.744969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:01.809189  445201 cri.go:89] found id: ""
	I1026 09:19:01.809215  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.809226  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:01.809235  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:01.809259  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:01.853086  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:01.853113  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:01.952100  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:01.952140  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:01.971357  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:01.971388  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:02.113349  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:02.113389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:02.189865  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:02.189898  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:02.222701  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:19:02.222762  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:19:02.251641  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:02.251668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:02.305611  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:02.305673  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:02.502516  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:02.502556  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:02.604316  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:02.604337  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:02.604349  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
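Note the change at 09:18:49 above: a second kube-controller-manager ID (59a79c41…) appears alongside 211b28dc…, because `crictl ps -a` also lists exited containers, so a restart leaves both IDs visible for a while. Diffing consecutive polls makes that churn explicit; a small sketch using the two IDs actually seen in this log:

// churn.go — spot container restarts by diffing consecutive ID listings.
package main

import "fmt"

// newIDs returns the IDs present in cur but not in prev.
func newIDs(prev, cur []string) []string {
	seen := map[string]bool{}
	for _, id := range prev {
		seen[id] = true
	}
	var fresh []string
	for _, id := range cur {
		if !seen[id] {
			fresh = append(fresh, id)
		}
	}
	return fresh
}

func main() {
	prev := []string{"211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"}
	cur := []string{
		"59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c",
		"211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad",
	}
	fmt.Println("new kube-controller-manager containers:", newIDs(prev, cur))
}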
	I1026 09:19:05.157227  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:05.172427  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:05.172497  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:05.212673  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:05.212695  445201 cri.go:89] found id: ""
	I1026 09:19:05.212703  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:05.212762  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.218913  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:05.218988  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:05.266883  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:05.266951  445201 cri.go:89] found id: ""
	I1026 09:19:05.266976  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:05.267067  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.273271  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:05.273341  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:05.328492  445201 cri.go:89] found id: ""
	I1026 09:19:05.328515  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.328524  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:05.328530  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:05.328589  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:05.361661  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:05.361681  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:05.361686  445201 cri.go:89] found id: ""
	I1026 09:19:05.361693  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:05.361750  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.366096  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.370154  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:05.370277  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:05.429405  445201 cri.go:89] found id: ""
	I1026 09:19:05.429428  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.429438  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:05.429444  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:05.429503  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:05.459155  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:05.459177  445201 cri.go:89] found id: ""
	I1026 09:19:05.459185  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:05.459251  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.463412  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:05.463490  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:05.497816  445201 cri.go:89] found id: ""
	I1026 09:19:05.497898  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.497922  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:05.497944  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:05.498044  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:05.547135  445201 cri.go:89] found id: ""
	I1026 09:19:05.547157  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.547165  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:05.547180  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:05.547191  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:05.765529  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:05.765619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:05.897332  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:05.897415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:05.942114  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:05.942147  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:06.026733  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:06.026772  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:06.058485  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:06.058517  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:06.142317  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:06.142354  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:06.173836  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:06.173866  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:06.191159  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:06.191191  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:06.262250  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:06.262269  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:06.262283  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
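The interleaved `which crictl` runs resolve the binary's absolute path before the log-tail commands invoke it as /usr/local/bin/crictl. Go's exec.LookPath performs the same PATH resolution, shown here only as a local sanity check:

// lookpath.go — replicate the repeated `which crictl` resolution locally.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	path, err := exec.LookPath("crictl")
	if err != nil {
		fmt.Println("crictl not on PATH:", err)
		return
	}
	fmt.Println("crictl resolved to", path) // /usr/local/bin/crictl in this run
}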
	I1026 09:19:08.793456  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:08.806010  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:08.806104  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:08.832478  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:08.832501  445201 cri.go:89] found id: ""
	I1026 09:19:08.832510  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:08.832592  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.836764  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:08.836886  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:08.864680  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:08.864703  445201 cri.go:89] found id: ""
	I1026 09:19:08.864713  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:08.864790  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.868822  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:08.868922  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:08.915861  445201 cri.go:89] found id: ""
	I1026 09:19:08.915885  445201 logs.go:282] 0 containers: []
	W1026 09:19:08.915894  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:08.915900  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:08.915956  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:08.958216  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:08.958236  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:08.958241  445201 cri.go:89] found id: ""
	I1026 09:19:08.958248  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:08.958304  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.964680  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.969953  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:08.970071  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:09.018018  445201 cri.go:89] found id: ""
	I1026 09:19:09.018107  445201 logs.go:282] 0 containers: []
	W1026 09:19:09.018130  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:09.018163  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:09.018284  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:09.059907  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:09.059987  445201 cri.go:89] found id: ""
	I1026 09:19:09.060010  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:09.060116  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:09.065304  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:09.065406  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:09.103164  445201 cri.go:89] found id: ""
	I1026 09:19:09.103189  445201 logs.go:282] 0 containers: []
	W1026 09:19:09.103198  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:09.103205  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:09.103317  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:09.138529  445201 cri.go:89] found id: ""
	I1026 09:19:09.138555  445201 logs.go:282] 0 containers: []
	W1026 09:19:09.138564  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:09.138579  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:09.138612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:09.228162  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:09.228184  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:09.228200  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:09.297498  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:09.297576  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:09.330960  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:09.331040  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:09.358609  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:09.358678  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:09.411374  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:09.411410  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:09.646481  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:09.646521  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:09.663058  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:09.663088  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:09.752277  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:09.752366  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:09.793332  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:09.793367  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
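
The repeated "listing CRI containers" / "found id" pairs above are minikube's per-component container discovery: for each control-plane component it runs "sudo crictl ps -a --quiet --name=<component>" over SSH and collects the printed IDs. A minimal sketch of that pattern in Go, assuming crictl is installed on the node (the helper name listContainerIDs is hypothetical, not minikube's actual function):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<name>":
    // -a includes exited containers, --quiet prints one container ID per line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }

An empty result from the helper is exactly what produces the "0 containers" and 'No container was found matching "<name>"' warnings in the log; two IDs for kube-scheduler means an exited container is listed alongside the running one.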
	I1026 09:19:12.383633  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:12.401752  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:12.401839  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:12.438665  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:12.438686  445201 cri.go:89] found id: ""
	I1026 09:19:12.438694  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:12.438783  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.443593  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:12.443675  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:12.472091  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:12.472111  445201 cri.go:89] found id: ""
	I1026 09:19:12.472120  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:12.472180  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.475808  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:12.475886  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:12.514267  445201 cri.go:89] found id: ""
	I1026 09:19:12.514294  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.514304  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:12.514309  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:12.514366  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:12.542080  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:12.542104  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:12.542110  445201 cri.go:89] found id: ""
	I1026 09:19:12.542117  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:12.542174  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.546039  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.549615  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:12.549723  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:12.581223  445201 cri.go:89] found id: ""
	I1026 09:19:12.581250  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.581260  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:12.581266  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:12.581356  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:12.607499  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:12.607534  445201 cri.go:89] found id: ""
	I1026 09:19:12.607543  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:12.607617  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.611325  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:12.611402  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:12.641240  445201 cri.go:89] found id: ""
	I1026 09:19:12.641265  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.641275  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:12.641281  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:12.641337  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:12.667996  445201 cri.go:89] found id: ""
	I1026 09:19:12.668022  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.668031  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:12.668052  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:12.668067  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:12.684406  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:12.684484  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:12.724395  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:12.724427  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:12.789568  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:12.789604  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:12.875805  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:12.875841  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:12.951166  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:12.951186  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:12.951198  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:13.039548  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:13.039585  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:13.068138  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:13.068168  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:13.097855  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:13.097884  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:13.131626  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:13.131654  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
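
Every "describe nodes" attempt above fails the same way: the kube-apiserver container exists, but nothing is accepting connections on localhost:8443 yet, so kubectl's TCP dial is refused. A standalone probe of that condition, assuming the secure port shown in the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same condition kubectl reports as "connection ... was refused":
        // the dial fails while the apiserver is not yet serving.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }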
	I1026 09:19:15.821288  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:15.832161  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:15.832241  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:15.861216  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:15.861236  445201 cri.go:89] found id: ""
	I1026 09:19:15.861244  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:15.861297  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.865258  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:15.865335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:15.896097  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:15.896120  445201 cri.go:89] found id: ""
	I1026 09:19:15.896129  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:15.896210  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.899835  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:15.899910  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:15.926309  445201 cri.go:89] found id: ""
	I1026 09:19:15.926336  445201 logs.go:282] 0 containers: []
	W1026 09:19:15.926345  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:15.926351  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:15.926409  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:15.952777  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:15.952801  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:15.952806  445201 cri.go:89] found id: ""
	I1026 09:19:15.952812  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:15.952870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.956680  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.960135  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:15.960205  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:15.990023  445201 cri.go:89] found id: ""
	I1026 09:19:15.990049  445201 logs.go:282] 0 containers: []
	W1026 09:19:15.990058  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:15.990064  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:15.990128  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:16.021736  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:16.021812  445201 cri.go:89] found id: ""
	I1026 09:19:16.021845  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:16.021940  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:16.025743  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:16.025814  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:16.053000  445201 cri.go:89] found id: ""
	I1026 09:19:16.053027  445201 logs.go:282] 0 containers: []
	W1026 09:19:16.053037  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:16.053043  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:16.053104  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:16.080003  445201 cri.go:89] found id: ""
	I1026 09:19:16.080063  445201 logs.go:282] 0 containers: []
	W1026 09:19:16.080072  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:16.080087  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:16.080104  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:16.264436  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:16.264474  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:16.332840  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:16.332859  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:16.332876  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:16.415544  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:16.415580  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:16.451127  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:16.451154  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:16.467646  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:16.467674  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:16.558547  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:16.558590  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:16.603633  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:16.603718  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:16.695146  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:16.695186  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:16.724106  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:16.724133  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
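
Each "Gathering logs for <component> [<id>] ..." line pairs a discovered container ID with "crictl logs --tail 400 <id>", so only the newest 400 lines per container are fetched on every pass. A sketch of that fetch, reusing the apiserver ID and crictl path from this run (CombinedOutput is used because container log lines may arrive on either stream):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs runs the same command shape seen in the log above.
    func tailContainerLogs(id string, lines int) (string, error) {
        script := fmt.Sprintf("sudo /usr/local/bin/crictl logs --tail %d %s", lines, id)
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailContainerLogs("68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668", 400)
        if err != nil {
            fmt.Println("fetch failed:", err)
        }
        fmt.Print(out)
    }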
	I1026 09:19:19.254855  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:19.265829  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:19.265901  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:19.295749  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:19.295772  445201 cri.go:89] found id: ""
	I1026 09:19:19.295780  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:19.295834  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.299585  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:19.299655  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:19.325212  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:19.325233  445201 cri.go:89] found id: ""
	I1026 09:19:19.325242  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:19.325298  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.328922  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:19.328992  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:19.356309  445201 cri.go:89] found id: ""
	I1026 09:19:19.356334  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.356342  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:19.356352  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:19.356411  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:19.384013  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:19.384044  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:19.384049  445201 cri.go:89] found id: ""
	I1026 09:19:19.384056  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:19.384115  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.392486  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.396329  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:19.396402  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:19.424188  445201 cri.go:89] found id: ""
	I1026 09:19:19.424218  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.424227  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:19.424233  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:19.424313  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:19.452797  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:19.452836  445201 cri.go:89] found id: ""
	I1026 09:19:19.452845  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:19.452959  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.456593  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:19.456688  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:19.481805  445201 cri.go:89] found id: ""
	I1026 09:19:19.481832  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.481841  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:19.481848  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:19.481908  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:19.522358  445201 cri.go:89] found id: ""
	I1026 09:19:19.522383  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.522391  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:19.522406  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:19.522421  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:19.616560  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:19.616601  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:19.651182  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:19.651248  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:19.679075  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:19.679105  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:19.711689  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:19.711717  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:19.902761  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:19.902799  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:19.920018  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:19.920109  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:19.991407  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:19.991449  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:20.025552  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:20.025583  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:20.111969  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:20.112009  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:20.183281  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:22.683932  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:22.694857  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:22.694924  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:22.723534  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:22.723570  445201 cri.go:89] found id: ""
	I1026 09:19:22.723579  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:22.723652  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.727309  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:22.727383  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:22.755329  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:22.755351  445201 cri.go:89] found id: ""
	I1026 09:19:22.755359  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:22.755418  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.759291  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:22.759367  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:22.786028  445201 cri.go:89] found id: ""
	I1026 09:19:22.786054  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.786062  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:22.786068  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:22.786127  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:22.814234  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:22.814255  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:22.814260  445201 cri.go:89] found id: ""
	I1026 09:19:22.814267  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:22.814329  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.818126  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.821666  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:22.821738  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:22.849444  445201 cri.go:89] found id: ""
	I1026 09:19:22.849467  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.849475  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:22.849481  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:22.849541  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:22.881184  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:22.881204  445201 cri.go:89] found id: ""
	I1026 09:19:22.881212  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:22.881269  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.885355  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:22.885474  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:22.915425  445201 cri.go:89] found id: ""
	I1026 09:19:22.915449  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.915459  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:22.915465  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:22.915521  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:22.945808  445201 cri.go:89] found id: ""
	I1026 09:19:22.945835  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.945845  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:22.945861  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:22.945872  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:22.981028  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:22.981063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:23.062992  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:23.063030  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:23.095047  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:23.095074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:23.176099  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:23.176135  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:23.207189  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:23.207220  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:23.224316  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:23.224349  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:23.254199  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:23.254229  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:23.445110  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:23.445146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:23.521902  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:23.521925  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:23.521938  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:26.118456  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:26.129332  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:26.129409  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:26.156225  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:26.156252  445201 cri.go:89] found id: ""
	I1026 09:19:26.156261  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:26.156318  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.160359  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:26.160433  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:26.187545  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:26.187623  445201 cri.go:89] found id: ""
	I1026 09:19:26.187646  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:26.187734  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.191452  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:26.191535  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:26.220053  445201 cri.go:89] found id: ""
	I1026 09:19:26.220083  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.220092  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:26.220098  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:26.220157  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:26.247035  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:26.247061  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:26.247066  445201 cri.go:89] found id: ""
	I1026 09:19:26.247074  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:26.247128  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.251068  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.254617  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:26.254791  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:26.280418  445201 cri.go:89] found id: ""
	I1026 09:19:26.280444  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.280453  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:26.280460  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:26.280540  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:26.307133  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:26.307155  445201 cri.go:89] found id: ""
	I1026 09:19:26.307164  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:26.307218  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.310730  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:26.310805  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:26.338848  445201 cri.go:89] found id: ""
	I1026 09:19:26.338933  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.338965  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:26.338990  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:26.339074  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:26.368727  445201 cri.go:89] found id: ""
	I1026 09:19:26.368802  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.368817  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:26.368835  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:26.368846  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:26.557443  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:26.557484  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:26.626887  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:26.626911  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:26.626938  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:26.715465  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:26.715506  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:26.749738  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:26.749773  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:26.817294  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:26.817328  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:26.845295  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:26.845320  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:26.927592  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:26.927623  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:26.958755  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:26.958784  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:26.975590  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:26.975619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:29.505512  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:29.517231  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:29.517298  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:29.549241  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:29.549265  445201 cri.go:89] found id: ""
	I1026 09:19:29.549284  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:29.549352  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.553401  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:29.553473  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:29.585756  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:29.585779  445201 cri.go:89] found id: ""
	I1026 09:19:29.585787  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:29.585855  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.589981  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:29.590059  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:29.620889  445201 cri.go:89] found id: ""
	I1026 09:19:29.620976  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.621000  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:29.621030  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:29.621108  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:29.656684  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:29.656707  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:29.656712  445201 cri.go:89] found id: ""
	I1026 09:19:29.656720  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:29.656775  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.660970  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.664791  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:29.664911  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:29.692679  445201 cri.go:89] found id: ""
	I1026 09:19:29.692706  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.692715  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:29.692722  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:29.692780  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:29.723291  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:29.723314  445201 cri.go:89] found id: ""
	I1026 09:19:29.723323  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:29.723384  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.727266  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:29.727382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:29.764602  445201 cri.go:89] found id: ""
	I1026 09:19:29.764625  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.764634  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:29.764641  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:29.764698  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:29.791509  445201 cri.go:89] found id: ""
	I1026 09:19:29.791541  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.791551  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:29.791570  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:29.791581  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:29.986761  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:29.986809  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:30.021126  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:30.021163  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:30.115835  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:30.115876  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:30.206256  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:30.206351  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:30.276098  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:30.276165  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:30.276185  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:30.378640  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:30.378680  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:30.418069  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:30.418104  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:30.453863  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:30.453897  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:30.485272  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:30.485304  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
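
The whole gathering cycle above restarts roughly every three and a half seconds with "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches against the full command line, -x requires an exact match of that pattern, -n keeps only the newest matching PID, and a zero exit status means a live apiserver process was found. A minimal retry loop in the same spirit (the interval is inferred from the timestamps in this log, not taken from minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        pattern := "kube-apiserver.*minikube.*"
        for attempt := 1; attempt <= 20; attempt++ {
            // pgrep exits 0 when at least one process matches, 1 when none does.
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                fmt.Println("kube-apiserver process found on attempt", attempt)
                return
            }
            time.Sleep(3500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }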
	I1026 09:19:33.030943  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:33.043496  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:33.043569  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:33.072212  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:33.072236  445201 cri.go:89] found id: ""
	I1026 09:19:33.072244  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:33.072323  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.076809  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:33.076918  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:33.104618  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:33.104685  445201 cri.go:89] found id: ""
	I1026 09:19:33.104707  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:33.104785  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.108546  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:33.108616  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:33.135652  445201 cri.go:89] found id: ""
	I1026 09:19:33.135691  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.135700  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:33.135707  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:33.135774  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:33.168721  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:33.168744  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:33.168750  445201 cri.go:89] found id: ""
	I1026 09:19:33.168757  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:33.168812  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.172699  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.176639  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:33.176717  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:33.207086  445201 cri.go:89] found id: ""
	I1026 09:19:33.207111  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.207120  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:33.207126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:33.207186  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:33.234147  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:33.234171  445201 cri.go:89] found id: ""
	I1026 09:19:33.234182  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:33.234237  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.238290  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:33.238365  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:33.265342  445201 cri.go:89] found id: ""
	I1026 09:19:33.265379  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.265388  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:33.265394  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:33.265496  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:33.291779  445201 cri.go:89] found id: ""
	I1026 09:19:33.291806  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.291814  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:33.291829  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:33.291842  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:33.326396  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:33.326429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:33.398166  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:33.398201  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:33.436776  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:33.436804  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:33.455157  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:33.455187  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:33.550806  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 09:19:33.550829  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:33.550843  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:33.658497  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:33.658534  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:33.690020  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:33.690046  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:33.719896  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:33.719927  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:33.806073  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:33.806113  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
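
	The cycle above is minikube's standard CRI diagnostic sweep: for each control-plane component it shells into the node and lists matching containers, and an empty result is logged as "0 containers" plus a warning. A minimal sketch reproducing the sweep by hand (assuming crictl is on the node's PATH; the component names are the ones probed above):

	    # enumerate CRI containers per component, all states, IDs only
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done
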
	I1026 09:19:36.509281  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:36.520622  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:36.520698  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:36.546223  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:36.546246  445201 cri.go:89] found id: ""
	I1026 09:19:36.546254  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:36.546310  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.549997  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:36.550075  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:36.583408  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:36.583435  445201 cri.go:89] found id: ""
	I1026 09:19:36.583444  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:36.583508  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.587184  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:36.587252  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:36.614029  445201 cri.go:89] found id: ""
	I1026 09:19:36.614052  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.614060  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:36.614067  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:36.614124  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:36.646229  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:36.646249  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:36.646254  445201 cri.go:89] found id: ""
	I1026 09:19:36.646262  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:36.646314  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.650036  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.653367  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:36.653435  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:36.678532  445201 cri.go:89] found id: ""
	I1026 09:19:36.678555  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.678565  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:36.678571  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:36.678627  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:36.706608  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:36.706632  445201 cri.go:89] found id: ""
	I1026 09:19:36.706650  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:36.706733  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.710435  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:36.710503  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:36.741337  445201 cri.go:89] found id: ""
	I1026 09:19:36.741363  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.741372  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:36.741378  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:36.741440  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:36.770349  445201 cri.go:89] found id: ""
	I1026 09:19:36.770376  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.770385  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:36.770404  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:36.770415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:36.956331  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:36.956374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:37.033188  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:37.033220  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:37.033232  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:37.069780  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:37.069816  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:37.097772  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:37.097802  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:37.125526  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:37.125553  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:37.209767  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:37.209804  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:37.227966  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:37.227996  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:37.318972  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:37.319009  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:37.389310  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:37.389347  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
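
	The "container status" step just above is deliberately runtime-agnostic. The backtick expression resolves crictl to an absolute path via which, falls back to the bare name if the lookup fails, and falls back to Docker entirely if the crictl invocation itself errors out:

	    # prefer a resolved crictl path, then bare crictl, then docker
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
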
	I1026 09:19:39.920329  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:39.934448  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:39.934519  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:39.961696  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:39.961730  445201 cri.go:89] found id: ""
	I1026 09:19:39.961739  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:39.961797  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:39.966091  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:39.966171  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:39.994339  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:39.994363  445201 cri.go:89] found id: ""
	I1026 09:19:39.994371  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:39.994429  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:39.998234  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:39.998326  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:40.038519  445201 cri.go:89] found id: ""
	I1026 09:19:40.038548  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.038557  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:40.038563  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:40.038645  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:40.073675  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:40.073700  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:40.073706  445201 cri.go:89] found id: ""
	I1026 09:19:40.073714  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:40.073772  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:40.077812  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:40.081791  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:40.081865  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:40.112934  445201 cri.go:89] found id: ""
	I1026 09:19:40.112961  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.112976  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:40.112983  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:40.113048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:40.141045  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:40.141066  445201 cri.go:89] found id: ""
	I1026 09:19:40.141074  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:40.141157  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:40.145319  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:40.145398  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:40.175080  445201 cri.go:89] found id: ""
	I1026 09:19:40.175106  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.175114  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:40.175120  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:40.175222  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:40.222953  445201 cri.go:89] found id: ""
	I1026 09:19:40.222977  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.222986  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:40.223000  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:40.223010  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:40.240693  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:40.240723  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:40.311992  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:40.312012  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:40.312081  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:40.343513  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:40.343544  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:40.375636  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:40.375706  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:40.573545  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:40.573587  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:40.665188  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:40.665275  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:40.706956  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:40.706990  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:40.778824  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:40.778859  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:40.871288  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:40.871333  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
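
	Every "describe nodes" attempt in these cycles fails identically: the node-local kubeconfig points at localhost:8443, nothing is accepting connections there, and kubectl exits 1 with "connection refused" before any node data is fetched. A quick hedged probe of the same endpoint, run from a shell on the node (port and paths taken from the log above):

	    # does anything answer on the apiserver port at all?
	    curl -sk https://localhost:8443/healthz || echo "apiserver not serving"
	    # same question via the node's own kubectl and kubeconfig
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
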
	I1026 09:19:43.403814  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:43.417134  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:43.417234  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:43.451712  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:43.451735  445201 cri.go:89] found id: ""
	I1026 09:19:43.451744  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:43.451805  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.456597  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:43.456744  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:43.487598  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:43.487621  445201 cri.go:89] found id: ""
	I1026 09:19:43.487630  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:43.487687  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.491554  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:43.491635  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:43.524925  445201 cri.go:89] found id: ""
	I1026 09:19:43.524950  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.524959  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:43.524966  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:43.525025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:43.559384  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:43.559409  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:43.559418  445201 cri.go:89] found id: ""
	I1026 09:19:43.559426  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:43.559505  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.563510  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.567393  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:43.567490  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:43.595768  445201 cri.go:89] found id: ""
	I1026 09:19:43.595796  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.595805  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:43.595811  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:43.595869  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:43.629403  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:43.629425  445201 cri.go:89] found id: ""
	I1026 09:19:43.629433  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:43.629511  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.633896  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:43.634000  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:43.662513  445201 cri.go:89] found id: ""
	I1026 09:19:43.662550  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.662560  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:43.662566  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:43.662632  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:43.691667  445201 cri.go:89] found id: ""
	I1026 09:19:43.691694  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.691704  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:43.691720  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:43.691731  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:43.889884  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:43.889922  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:43.907194  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:43.907226  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:43.996451  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:43.996485  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:44.034823  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:44.034858  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:44.124289  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:44.124329  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:44.155342  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:44.155374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:44.228174  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:44.228197  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:44.228210  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:44.312702  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:44.312735  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:44.345172  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:44.345199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
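
	Each "Gathering logs for <component> [<id>] ..." pair maps to one crictl logs call with a 400-line cap against the container ID found during the sweep. The lookup and fetch chained by hand (a sketch assuming exactly one matching container):

	    # last 400 log lines of the etcd container, as logs.go requests them
	    id=$(sudo crictl ps -a --quiet --name=etcd | head -n1)
	    [ -n "$id" ] && sudo /usr/local/bin/crictl logs --tail 400 "$id"
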
	I1026 09:19:46.872711  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:46.883842  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:46.883911  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:46.914222  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:46.914244  445201 cri.go:89] found id: ""
	I1026 09:19:46.914252  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:46.914312  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:46.918073  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:46.918149  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:46.944474  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:46.944499  445201 cri.go:89] found id: ""
	I1026 09:19:46.944507  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:46.944563  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:46.948640  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:46.948760  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:46.975889  445201 cri.go:89] found id: ""
	I1026 09:19:46.975915  445201 logs.go:282] 0 containers: []
	W1026 09:19:46.975924  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:46.975930  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:46.975986  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:47.003949  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:47.003974  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:47.003979  445201 cri.go:89] found id: ""
	I1026 09:19:47.003987  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:47.004060  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:47.008583  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:47.012475  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:47.012598  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:47.038419  445201 cri.go:89] found id: ""
	I1026 09:19:47.038502  445201 logs.go:282] 0 containers: []
	W1026 09:19:47.038535  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:47.038564  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:47.038654  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:47.067475  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:47.067540  445201 cri.go:89] found id: ""
	I1026 09:19:47.067563  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:47.067656  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:47.072094  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:47.072190  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:47.099129  445201 cri.go:89] found id: ""
	I1026 09:19:47.099155  445201 logs.go:282] 0 containers: []
	W1026 09:19:47.099163  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:47.099169  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:47.099267  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:47.130045  445201 cri.go:89] found id: ""
	I1026 09:19:47.130112  445201 logs.go:282] 0 containers: []
	W1026 09:19:47.130136  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:47.130169  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:47.130199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:47.157890  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:47.157921  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:47.242542  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:47.242581  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:47.447551  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:47.447632  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:47.465004  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:47.465079  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:47.548858  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:47.548923  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:47.548950  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:47.646494  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:47.646529  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:47.686085  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:47.686117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:47.713578  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:47.713608  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:47.742696  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:47.742763  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
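
	kube-scheduler is the one component that keeps returning two container IDs (d400e8b9... and a0bdcee4...). Because the listing uses State:all, exited containers are included; a second ID usually indicates a restarted instance whose predecessor is still present, and minikube duly gathers logs from both. Dropping --quiet shows the state column and makes the pair easy to tell apart:

	    # table output includes STATE (Running/Exited) and CREATED columns
	    sudo crictl ps -a --name=kube-scheduler
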
	I1026 09:19:50.319936  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:50.331143  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:50.331216  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:50.357915  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:50.357939  445201 cri.go:89] found id: ""
	I1026 09:19:50.357947  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:50.358003  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.361808  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:50.361884  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:50.393021  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:50.393042  445201 cri.go:89] found id: ""
	I1026 09:19:50.393051  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:50.393104  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.396989  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:50.397060  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:50.424294  445201 cri.go:89] found id: ""
	I1026 09:19:50.424319  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.424328  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:50.424335  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:50.424395  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:50.450782  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:50.450802  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:50.450807  445201 cri.go:89] found id: ""
	I1026 09:19:50.450814  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:50.450870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.454550  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.458056  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:50.458158  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:50.485349  445201 cri.go:89] found id: ""
	I1026 09:19:50.485424  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.485447  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:50.485469  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:50.485558  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:50.521298  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:50.521363  445201 cri.go:89] found id: ""
	I1026 09:19:50.521385  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:50.521471  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.525189  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:50.525285  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:50.553967  445201 cri.go:89] found id: ""
	I1026 09:19:50.553992  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.554001  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:50.554008  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:50.554096  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:50.582250  445201 cri.go:89] found id: ""
	I1026 09:19:50.582274  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.582283  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:50.582314  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:50.582336  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:50.599045  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:50.599077  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:50.691417  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:50.691478  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:50.691508  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:50.763525  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:50.763920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:50.795943  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:50.795968  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:50.878180  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:50.878219  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:50.921117  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:50.921146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:51.123124  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:51.123165  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:51.231502  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:51.231540  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:51.305423  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:51.305457  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
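
	The pgrep probe that opens each cycle (sudo pgrep -xnf kube-apiserver.*minikube.*) only checks that an apiserver process exists; here it keeps succeeding while port 8443 stays closed, and the whole gather-and-retry loop repeats roughly every three to four seconds. A standalone wait loop in the same spirit (the 120-second budget is an assumption, not minikube's actual timeout):

	    # poll for a kube-apiserver process the way each cycle starts
	    deadline=$((SECONDS + 120))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out"; exit 1; }
	      sleep 3
	    done
	    echo "kube-apiserver process found"
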
	I1026 09:19:53.834574  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:53.848958  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:53.849029  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:53.876821  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:53.876841  445201 cri.go:89] found id: ""
	I1026 09:19:53.876849  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:53.876917  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.880939  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:53.881059  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:53.910223  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:53.910246  445201 cri.go:89] found id: ""
	I1026 09:19:53.910255  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:53.910316  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.914035  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:53.914114  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:53.940967  445201 cri.go:89] found id: ""
	I1026 09:19:53.940992  445201 logs.go:282] 0 containers: []
	W1026 09:19:53.941000  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:53.941006  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:53.941063  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:53.970066  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:53.970087  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:53.970103  445201 cri.go:89] found id: ""
	I1026 09:19:53.970112  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:53.970168  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.974073  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.977507  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:53.977582  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:54.006522  445201 cri.go:89] found id: ""
	I1026 09:19:54.006548  445201 logs.go:282] 0 containers: []
	W1026 09:19:54.006557  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:54.006563  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:54.006628  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:54.036193  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:54.036214  445201 cri.go:89] found id: ""
	I1026 09:19:54.036222  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:54.036282  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:54.040972  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:54.041076  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:54.072998  445201 cri.go:89] found id: ""
	I1026 09:19:54.073024  445201 logs.go:282] 0 containers: []
	W1026 09:19:54.073033  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:54.073039  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:54.073099  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:54.104988  445201 cri.go:89] found id: ""
	I1026 09:19:54.105024  445201 logs.go:282] 0 containers: []
	W1026 09:19:54.105033  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:54.105052  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:54.105063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:54.307387  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:54.307426  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:54.382000  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:54.382026  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:54.382041  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:54.476834  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:54.476903  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:54.533817  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:54.533854  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:54.562847  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:54.562932  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:54.594186  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:54.594216  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:54.611803  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:54.611835  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:54.696633  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:54.696668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:54.724258  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:54.724285  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
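
	Host-level sources are collected outside the CRI: journalctl takes the last 400 entries for the kubelet and crio units, and dmesg filters the kernel ring buffer to warnings and worse (-H human-readable timestamps, -P no pager, -L=never no color, --level selecting the severities). The same commands, runnable directly on the node:

	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u crio -n 400 --no-pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
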
	I1026 09:19:57.314536  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:57.325708  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:57.325832  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:57.352680  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:57.352702  445201 cri.go:89] found id: ""
	I1026 09:19:57.352710  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:57.352768  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.356627  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:57.356710  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:57.396056  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:57.396078  445201 cri.go:89] found id: ""
	I1026 09:19:57.396086  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:57.396176  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.400072  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:57.400145  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:57.427812  445201 cri.go:89] found id: ""
	I1026 09:19:57.427841  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.427850  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:57.427857  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:57.427917  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:57.456429  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:57.456452  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:57.456458  445201 cri.go:89] found id: ""
	I1026 09:19:57.456467  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:57.456525  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.460645  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.464436  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:57.464508  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:57.491704  445201 cri.go:89] found id: ""
	I1026 09:19:57.491741  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.491750  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:57.491756  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:57.491827  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:57.527829  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:57.527853  445201 cri.go:89] found id: ""
	I1026 09:19:57.527861  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:57.527940  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.531724  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:57.531837  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:57.560862  445201 cri.go:89] found id: ""
	I1026 09:19:57.560889  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.560897  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:57.560903  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:57.560963  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:57.587217  445201 cri.go:89] found id: ""
	I1026 09:19:57.587293  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.587307  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:57.587323  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:57.587334  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:57.672410  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:57.672448  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:57.865888  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:57.865929  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:57.897311  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:57.897340  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:57.914172  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:57.914202  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:57.988120  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:57.988141  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:57.988154  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:58.089631  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:58.089672  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:58.126451  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:58.126482  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:58.201184  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:58.201222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:58.231003  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:58.231030  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
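
	Note also that "describe nodes" is run with a version-matched kubectl staged under /var/lib/minikube/binaries/v1.34.1/ on the node rather than a host kubectl, which rules out client/server version skew as a factor in the failures above. The exact probe can be repeated from a shell on the node (e.g. via minikube ssh):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
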
	I1026 09:20:00.759063  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:00.772489  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:00.772571  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:00.802524  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:00.802602  445201 cri.go:89] found id: ""
	I1026 09:20:00.802623  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:00.802738  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.807352  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:00.807463  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:00.837109  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:00.837132  445201 cri.go:89] found id: ""
	I1026 09:20:00.837140  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:00.837201  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.841411  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:00.841639  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:00.871396  445201 cri.go:89] found id: ""
	I1026 09:20:00.871423  445201 logs.go:282] 0 containers: []
	W1026 09:20:00.871431  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:00.871438  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:00.871543  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:00.899767  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:00.899793  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:00.899798  445201 cri.go:89] found id: ""
	I1026 09:20:00.899806  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:00.899865  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.904037  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.908205  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:00.908287  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:00.943720  445201 cri.go:89] found id: ""
	I1026 09:20:00.943800  445201 logs.go:282] 0 containers: []
	W1026 09:20:00.943823  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:00.943845  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:00.943938  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:00.972167  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:00.972189  445201 cri.go:89] found id: ""
	I1026 09:20:00.972197  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:00.972304  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.976224  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:00.976301  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:01.008110  445201 cri.go:89] found id: ""
	I1026 09:20:01.008144  445201 logs.go:282] 0 containers: []
	W1026 09:20:01.008153  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:01.008159  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:01.008235  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:01.035472  445201 cri.go:89] found id: ""
	I1026 09:20:01.035514  445201 logs.go:282] 0 containers: []
	W1026 09:20:01.035522  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:01.035553  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:01.035574  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:01.053806  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:01.053888  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:01.143967  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:01.144012  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:01.181616  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:01.181654  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:01.257186  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:01.257222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:01.289204  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:01.289233  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:01.317688  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:01.317720  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:01.399424  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:01.399466  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:01.473543  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:01.473564  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:01.473577  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:01.516832  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:01.516871  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
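
Every pass through this loop fails at the same point: kubectl cannot reach the apiserver on localhost:8443, so "describe nodes" yields nothing and the collector falls back to container and journal logs. A quick manual probe for the same condition (a sketch only; it assumes the default system:public-info-viewer binding, which exposes /healthz, /livez and /readyz to unauthenticated clients):

	# expect "ok" once the apiserver is accepting connections again
	curl -k https://localhost:8443/healthz
	# same check through the bundled kubectl and the kubeconfig the log uses
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
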
	I1026 09:20:04.205978  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:04.217084  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:04.217192  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:04.246438  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:04.246514  445201 cri.go:89] found id: ""
	I1026 09:20:04.246535  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:04.246609  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.253527  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:04.253653  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:04.281030  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:04.281108  445201 cri.go:89] found id: ""
	I1026 09:20:04.281123  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:04.281191  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.284980  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:04.285070  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:04.315436  445201 cri.go:89] found id: ""
	I1026 09:20:04.315464  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.315474  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:04.315480  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:04.315546  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:04.349347  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:04.349423  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:04.349443  445201 cri.go:89] found id: ""
	I1026 09:20:04.349467  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:04.349562  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.354196  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.358052  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:04.358166  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:04.389218  445201 cri.go:89] found id: ""
	I1026 09:20:04.389299  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.389321  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:04.389343  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:04.389437  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:04.416455  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:04.416492  445201 cri.go:89] found id: ""
	I1026 09:20:04.416501  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:04.416559  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.420377  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:04.420476  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:04.447634  445201 cri.go:89] found id: ""
	I1026 09:20:04.447657  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.447666  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:04.447673  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:04.447730  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:04.474156  445201 cri.go:89] found id: ""
	I1026 09:20:04.474181  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.474190  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:04.474202  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:04.474214  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:04.524678  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:04.524712  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:04.556118  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:04.556147  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:04.582653  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:04.582683  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:04.650076  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:04.650150  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:04.650176  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:04.740766  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:04.740805  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:04.813196  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:04.813231  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:04.901634  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:04.901670  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:04.932029  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:04.932061  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:05.133407  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:05.133449  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:07.655144  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:07.666212  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:07.666284  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:07.692697  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:07.692721  445201 cri.go:89] found id: ""
	I1026 09:20:07.692728  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:07.692783  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.696542  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:07.696615  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:07.724049  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:07.724073  445201 cri.go:89] found id: ""
	I1026 09:20:07.724082  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:07.724141  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.727799  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:07.727875  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:07.753315  445201 cri.go:89] found id: ""
	I1026 09:20:07.753387  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.753410  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:07.753432  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:07.753546  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:07.784900  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:07.784930  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:07.784936  445201 cri.go:89] found id: ""
	I1026 09:20:07.784944  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:07.785019  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.789772  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.793763  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:07.793838  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:07.824377  445201 cri.go:89] found id: ""
	I1026 09:20:07.824403  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.824412  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:07.824418  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:07.824477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:07.852049  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:07.852071  445201 cri.go:89] found id: ""
	I1026 09:20:07.852079  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:07.852135  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.855969  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:07.856048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:07.883689  445201 cri.go:89] found id: ""
	I1026 09:20:07.883764  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.883787  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:07.883810  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:07.883898  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:07.911108  445201 cri.go:89] found id: ""
	I1026 09:20:07.911142  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.911151  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:07.911182  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:07.911203  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:07.927536  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:07.927563  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:07.955598  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:07.955626  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:08.043127  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:08.043175  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:08.239554  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:08.239595  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:08.313871  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:08.313891  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:08.313906  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:08.409288  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:08.409330  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:08.445453  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:08.445490  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:08.542001  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:08.542041  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:08.575896  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:08.575925  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
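
The "container status" step is deliberately runtime-agnostic: it resolves crictl from PATH, and if the binary is missing (the substitution then prints the bare name and the first command fails) it falls back to docker. The same fallback, spelled out:

	# prefer crictl; fall back to docker if crictl is absent or errors out
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
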
	I1026 09:20:11.110016  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:11.123039  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:11.123119  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:11.156442  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:11.156519  445201 cri.go:89] found id: ""
	I1026 09:20:11.156536  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:11.156610  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.161156  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:11.161238  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:11.190856  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:11.190877  445201 cri.go:89] found id: ""
	I1026 09:20:11.190889  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:11.190947  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.194890  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:11.194965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:11.222317  445201 cri.go:89] found id: ""
	I1026 09:20:11.222401  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.222425  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:11.222448  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:11.222561  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:11.249996  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:11.250018  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:11.250023  445201 cri.go:89] found id: ""
	I1026 09:20:11.250030  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:11.250085  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.254563  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.258149  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:11.258249  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:11.285573  445201 cri.go:89] found id: ""
	I1026 09:20:11.285600  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.285609  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:11.285615  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:11.285704  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:11.314585  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:11.314607  445201 cri.go:89] found id: ""
	I1026 09:20:11.314616  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:11.314692  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.318565  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:11.318667  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:11.344845  445201 cri.go:89] found id: ""
	I1026 09:20:11.344875  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.344886  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:11.344892  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:11.344951  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:11.370675  445201 cri.go:89] found id: ""
	I1026 09:20:11.370699  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.370707  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:11.370751  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:11.370766  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:11.463898  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:11.463935  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:11.548152  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:11.548191  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:11.591820  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:11.591850  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:11.783333  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:11.783374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:11.856907  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:11.856973  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:11.856994  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:11.903575  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:11.903607  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:11.981815  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:11.981852  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:12.011424  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:12.011457  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:12.041713  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:12.041742  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
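
The dmesg step keeps only kernel messages at warning severity and above, with paging and color disabled so the output survives a non-interactive SSH session. The long-option spelling of the same command:

	# warn, err, crit, alert and emerg records from the kernel ring buffer
	sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400
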
	I1026 09:20:14.559373  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:14.570544  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:14.570614  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:14.598242  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:14.598265  445201 cri.go:89] found id: ""
	I1026 09:20:14.598273  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:14.598328  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.602127  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:14.602212  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:14.629914  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:14.629941  445201 cri.go:89] found id: ""
	I1026 09:20:14.629950  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:14.630006  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.633949  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:14.634024  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:14.672189  445201 cri.go:89] found id: ""
	I1026 09:20:14.672215  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.672223  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:14.672229  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:14.672313  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:14.698463  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:14.698487  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:14.698491  445201 cri.go:89] found id: ""
	I1026 09:20:14.698499  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:14.698554  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.702705  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.706624  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:14.706810  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:14.734428  445201 cri.go:89] found id: ""
	I1026 09:20:14.734453  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.734462  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:14.734468  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:14.734525  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:14.761855  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:14.761878  445201 cri.go:89] found id: ""
	I1026 09:20:14.761887  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:14.761942  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.765711  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:14.765788  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:14.792195  445201 cri.go:89] found id: ""
	I1026 09:20:14.792230  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.792239  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:14.792245  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:14.792312  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:14.818583  445201 cri.go:89] found id: ""
	I1026 09:20:14.818610  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.818619  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:14.818634  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:14.818646  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:14.897266  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:14.897302  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:14.979061  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:14.979102  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:14.997088  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:14.997117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:15.090410  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:15.090433  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:15.090451  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:15.182335  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:15.182377  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:15.219240  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:15.219276  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:15.249714  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:15.249741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:15.280669  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:15.280700  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:15.326700  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:15.326748  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
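
CRI-O and kubelet logs come from journald rather than crictl, since both run as systemd units on the node. The equivalent interactive commands (--no-pager is an addition here for terminal use; the collector runs these through bash over SSH, where journalctl does not page):

	# last 400 lines from the container runtime and the kubelet
	sudo journalctl -u crio -n 400 --no-pager
	sudo journalctl -u kubelet -n 400 --no-pager
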
	I1026 09:20:18.018858  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:18.030254  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:18.030329  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:18.059128  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:18.059156  445201 cri.go:89] found id: ""
	I1026 09:20:18.059165  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:18.059233  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.063347  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:18.063426  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:18.094571  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:18.094597  445201 cri.go:89] found id: ""
	I1026 09:20:18.094615  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:18.094670  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.098487  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:18.098566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:18.125039  445201 cri.go:89] found id: ""
	I1026 09:20:18.125111  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.125135  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:18.125147  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:18.125223  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:18.151735  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:18.151756  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:18.151761  445201 cri.go:89] found id: ""
	I1026 09:20:18.151769  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:18.151830  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.155811  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.159470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:18.159610  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:18.184732  445201 cri.go:89] found id: ""
	I1026 09:20:18.184798  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.184821  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:18.184846  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:18.184911  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:18.211783  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:18.211804  445201 cri.go:89] found id: ""
	I1026 09:20:18.211813  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:18.211870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.215473  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:18.215597  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:18.242266  445201 cri.go:89] found id: ""
	I1026 09:20:18.242293  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.242308  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:18.242345  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:18.242427  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:18.268622  445201 cri.go:89] found id: ""
	I1026 09:20:18.268645  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.268654  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:18.268687  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:18.268708  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:18.352828  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:18.352866  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:18.400611  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:18.400642  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:18.492244  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:18.492284  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:18.546983  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:18.547061  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:18.641528  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:18.641568  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:18.857153  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:18.857195  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:18.874796  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:18.874829  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:18.941360  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:18.941387  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:18.941401  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:18.969642  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:18.969672  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:21.498770  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:21.512054  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:21.512166  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:21.539248  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:21.539270  445201 cri.go:89] found id: ""
	I1026 09:20:21.539278  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:21.539351  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.543915  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:21.543986  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:21.572592  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:21.572659  445201 cri.go:89] found id: ""
	I1026 09:20:21.572681  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:21.572775  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.576884  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:21.577007  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:21.609371  445201 cri.go:89] found id: ""
	I1026 09:20:21.609417  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.609427  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:21.609434  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:21.609511  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:21.636297  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:21.636322  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:21.636328  445201 cri.go:89] found id: ""
	I1026 09:20:21.636335  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:21.636391  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.640143  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.643865  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:21.643965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:21.669821  445201 cri.go:89] found id: ""
	I1026 09:20:21.669863  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.669873  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:21.669879  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:21.669989  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:21.698658  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:21.698682  445201 cri.go:89] found id: ""
	I1026 09:20:21.698691  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:21.698778  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.703205  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:21.703276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:21.729633  445201 cri.go:89] found id: ""
	I1026 09:20:21.729657  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.729666  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:21.729672  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:21.729728  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:21.756700  445201 cri.go:89] found id: ""
	I1026 09:20:21.756724  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.756733  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:21.756748  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:21.756760  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:21.828451  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:21.828488  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:21.859441  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:21.859471  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:21.886181  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:21.886209  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:21.917168  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:21.917198  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:22.111064  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:22.111108  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:22.130247  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:22.130274  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:22.201415  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:22.201433  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:22.201445  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:22.295598  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:22.295635  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:22.335068  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:22.335102  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:24.923329  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:24.934499  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:24.934566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:24.965902  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:24.965921  445201 cri.go:89] found id: ""
	I1026 09:20:24.965930  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:24.965995  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:24.969939  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:24.970014  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:24.998458  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:24.998483  445201 cri.go:89] found id: ""
	I1026 09:20:24.998491  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:24.998566  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.012507  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:25.012667  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:25.051112  445201 cri.go:89] found id: ""
	I1026 09:20:25.051141  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.051151  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:25.051158  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:25.051275  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:25.086581  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:25.086663  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:25.086684  445201 cri.go:89] found id: ""
	I1026 09:20:25.086707  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:25.086829  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.091518  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.096321  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:25.096464  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:25.131206  445201 cri.go:89] found id: ""
	I1026 09:20:25.131242  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.131251  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:25.131258  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:25.131367  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:25.163147  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:25.163180  445201 cri.go:89] found id: ""
	I1026 09:20:25.163189  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:25.163257  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.168126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:25.168263  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:25.197318  445201 cri.go:89] found id: ""
	I1026 09:20:25.197345  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.197354  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:25.197360  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:25.197459  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:25.225627  445201 cri.go:89] found id: ""
	I1026 09:20:25.225700  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.225730  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:25.225765  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:25.225792  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:25.242503  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:25.242589  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:25.289744  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:25.289777  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:25.363241  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:25.363281  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:25.399205  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:25.399239  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:25.486729  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:25.486765  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:25.529849  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:25.529881  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:25.733698  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:25.733738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:25.811490  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:25.811518  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:25.811532  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:25.904834  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:25.904871  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
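	
	The cycle above shows the collection pass: minikube enumerates control-plane containers by name with crictl and then tails each one's logs. A minimal local sketch of that pass in Go, assuming crictl is on PATH and passwordless sudo, and running the commands directly rather than through minikube's ssh_runner (the program and variable names here are illustrative, not minikube's own):
	
	    // logsweep.go: re-create the collection pass seen in the log above.
	    // Assumptions (not from minikube's source): crictl is on PATH and
	    // sudo is passwordless; on the real node these commands run via SSH.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    func main() {
	        names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, name := range names {
	            // Same query the log shows: all states, IDs only, filtered by name.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("listing %s failed: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                fmt.Printf("no container found matching %q\n", name)
	                continue
	            }
	            for _, id := range ids {
	                // Tail the last 400 lines, as each "Gathering logs" step does.
	                logOut, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	                if err != nil {
	                    fmt.Printf("logs for %s [%s] failed: %v\n", name, id, err)
	                    continue
	                }
	                fmt.Printf("== %s [%s] ==\n%s", name, id, logOut)
	            }
	        }
	    }
	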
	I1026 09:20:28.433045  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:28.444334  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:28.444403  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:28.474756  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:28.474776  445201 cri.go:89] found id: ""
	I1026 09:20:28.474784  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:28.474838  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.478560  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:28.478631  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:28.515060  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:28.515081  445201 cri.go:89] found id: ""
	I1026 09:20:28.515090  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:28.515145  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.518916  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:28.519004  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:28.559110  445201 cri.go:89] found id: ""
	I1026 09:20:28.559132  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.559140  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:28.559146  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:28.559204  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:28.586836  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:28.586915  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:28.586936  445201 cri.go:89] found id: ""
	I1026 09:20:28.586949  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:28.587008  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.590860  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.594865  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:28.594933  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:28.620390  445201 cri.go:89] found id: ""
	I1026 09:20:28.620417  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.620426  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:28.620433  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:28.620543  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:28.652047  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:28.652069  445201 cri.go:89] found id: ""
	I1026 09:20:28.652077  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:28.652134  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.655864  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:28.655969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:28.681847  445201 cri.go:89] found id: ""
	I1026 09:20:28.681871  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.681880  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:28.681886  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:28.681991  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:28.708971  445201 cri.go:89] found id: ""
	I1026 09:20:28.708997  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.709007  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:28.709056  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:28.709074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:28.904970  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:28.905007  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:28.990331  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:28.990369  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:29.021189  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:29.021218  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:29.039055  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:29.039085  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:29.109298  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:29.109321  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:29.109334  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:29.144937  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:29.144971  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:29.219064  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:29.219102  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:29.247935  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:29.248003  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:29.331292  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:29.331370  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
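	
	Note the rhythm of the entries: every few seconds a fresh "sudo pgrep -xnf kube-apiserver.*minikube.*" fires and, finding nothing, triggers another full log-gathering cycle. That pattern implies a poll-until-deadline loop; a sketch of such a loop follows, with the interval, timeout, and output strings as assumptions rather than minikube's actual values:
	
	    // pollwait.go: a sketch of the wait loop implied by the repeating
	    // pgrep entries above; interval and deadline are assumed values.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )
	
	    func main() {
	        deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	        for time.Now().Before(deadline) {
	            // -f matches the full argv, -x requires the whole string to
	            // match, -n picks the newest PID; pgrep exits 1 on no match.
	            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	            if err == nil {
	                fmt.Println("kube-apiserver process found")
	                return
	            }
	            // Still absent: the real code re-gathers logs here, then retries.
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("timed out waiting for kube-apiserver")
	    }
	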
	I1026 09:20:31.863132  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:31.874377  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:31.874449  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:31.901077  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:31.901097  445201 cri.go:89] found id: ""
	I1026 09:20:31.901106  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:31.901160  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.905381  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:31.905450  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:31.930663  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:31.930682  445201 cri.go:89] found id: ""
	I1026 09:20:31.930690  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:31.930770  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.934318  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:31.934429  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:31.961825  445201 cri.go:89] found id: ""
	I1026 09:20:31.961848  445201 logs.go:282] 0 containers: []
	W1026 09:20:31.961857  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:31.961863  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:31.961925  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:31.988820  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:31.988893  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:31.988905  445201 cri.go:89] found id: ""
	I1026 09:20:31.988912  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:31.988980  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.992783  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.996548  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:31.996660  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:32.027172  445201 cri.go:89] found id: ""
	I1026 09:20:32.027250  445201 logs.go:282] 0 containers: []
	W1026 09:20:32.027273  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:32.027285  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:32.027359  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:32.059677  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:32.059698  445201 cri.go:89] found id: ""
	I1026 09:20:32.059706  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:32.059760  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:32.063552  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:32.063625  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:32.093636  445201 cri.go:89] found id: ""
	I1026 09:20:32.093706  445201 logs.go:282] 0 containers: []
	W1026 09:20:32.093729  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:32.093751  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:32.093848  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:32.121601  445201 cri.go:89] found id: ""
	I1026 09:20:32.121669  445201 logs.go:282] 0 containers: []
	W1026 09:20:32.121692  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:32.121722  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:32.121760  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:32.138320  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:32.138405  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:32.205854  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:32.205920  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:32.205945  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:32.241239  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:32.241333  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:32.327770  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:32.327807  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:32.363622  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:32.363652  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:32.404634  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:32.404662  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:32.437988  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:32.438017  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:32.654603  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:32.654674  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:32.764869  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:32.764909  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:35.352135  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:35.363548  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:35.363620  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:35.392085  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:35.392111  445201 cri.go:89] found id: ""
	I1026 09:20:35.392120  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:35.392178  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.396199  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:35.396276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:35.424720  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:35.424744  445201 cri.go:89] found id: ""
	I1026 09:20:35.424753  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:35.424810  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.428788  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:35.428888  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:35.456996  445201 cri.go:89] found id: ""
	I1026 09:20:35.457024  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.457033  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:35.457040  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:35.457147  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:35.484260  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:35.484285  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:35.484290  445201 cri.go:89] found id: ""
	I1026 09:20:35.484298  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:35.484377  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.488377  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.492137  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:35.492223  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:35.530290  445201 cri.go:89] found id: ""
	I1026 09:20:35.530318  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.530327  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:35.530333  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:35.530395  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:35.560296  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:35.560320  445201 cri.go:89] found id: ""
	I1026 09:20:35.560328  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:35.560383  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.564258  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:35.564335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:35.597504  445201 cri.go:89] found id: ""
	I1026 09:20:35.597532  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.597551  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:35.597558  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:35.597620  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:35.627116  445201 cri.go:89] found id: ""
	I1026 09:20:35.627143  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.627152  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:35.627167  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:35.627179  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:35.644287  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:35.644317  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:35.733109  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:35.733149  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:35.808772  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:35.808809  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:35.841387  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:35.841415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:35.873263  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:35.873292  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:36.061479  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:36.061520  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:36.135658  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:36.135696  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:36.135726  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:36.186430  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:36.186609  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:36.214658  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:36.214688  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
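	
	Each "describe nodes" step above fails the same way: the connection to localhost:8443 is refused, meaning nothing is listening on the apiserver's secure port yet, so kubectl cannot reach the cluster at all. A quick port probe (an illustration, not minikube code) demonstrates the same condition without going through kubectl:
	
	    // probe8443.go: ECONNREFUSED on localhost:8443 while the apiserver
	    // is down, matching the kubectl errors in the log above.
	    package main
	
	    import (
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("port 8443 is accepting connections")
	    }
	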
	I1026 09:20:38.795192  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:38.806691  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:38.806787  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:38.834339  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:38.834360  445201 cri.go:89] found id: ""
	I1026 09:20:38.834369  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:38.834426  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.838346  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:38.838430  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:38.866618  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:38.866638  445201 cri.go:89] found id: ""
	I1026 09:20:38.866646  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:38.866705  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.870616  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:38.870689  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:38.898034  445201 cri.go:89] found id: ""
	I1026 09:20:38.898059  445201 logs.go:282] 0 containers: []
	W1026 09:20:38.898068  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:38.898075  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:38.898133  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:38.926259  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:38.926281  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:38.926286  445201 cri.go:89] found id: ""
	I1026 09:20:38.926295  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:38.926349  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.930561  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.934502  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:38.934577  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:38.961668  445201 cri.go:89] found id: ""
	I1026 09:20:38.961695  445201 logs.go:282] 0 containers: []
	W1026 09:20:38.961704  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:38.961711  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:38.961782  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:38.989873  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:38.989898  445201 cri.go:89] found id: ""
	I1026 09:20:38.989907  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:38.989968  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.993821  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:38.993894  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:39.021827  445201 cri.go:89] found id: ""
	I1026 09:20:39.021860  445201 logs.go:282] 0 containers: []
	W1026 09:20:39.021873  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:39.021881  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:39.021959  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:39.047487  445201 cri.go:89] found id: ""
	I1026 09:20:39.047510  445201 logs.go:282] 0 containers: []
	W1026 09:20:39.047518  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:39.047533  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:39.047544  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:39.235619  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:39.235656  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:39.313109  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:39.313130  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:39.313143  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:39.394391  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:39.394430  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:39.482870  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:39.482920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:39.526396  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:39.526486  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:39.543665  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:39.543693  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:39.648571  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:39.648615  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:39.685280  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:39.685320  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:39.723214  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:39.723246  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:42.254876  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:42.267740  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:42.267847  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:42.297640  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:42.297709  445201 cri.go:89] found id: ""
	I1026 09:20:42.297733  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:42.297795  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.301950  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:42.302025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:42.332906  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:42.332979  445201 cri.go:89] found id: ""
	I1026 09:20:42.333002  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:42.333094  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.337360  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:42.337477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:42.369545  445201 cri.go:89] found id: ""
	I1026 09:20:42.369571  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.369580  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:42.369586  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:42.369643  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:42.400994  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:42.401018  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:42.401023  445201 cri.go:89] found id: ""
	I1026 09:20:42.401030  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:42.401085  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.405385  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.408969  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:42.409057  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:42.437013  445201 cri.go:89] found id: ""
	I1026 09:20:42.437078  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.437101  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:42.437125  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:42.437201  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:42.469295  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:42.469316  445201 cri.go:89] found id: ""
	I1026 09:20:42.469324  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:42.469400  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.473224  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:42.473298  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:42.509254  445201 cri.go:89] found id: ""
	I1026 09:20:42.509280  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.509289  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:42.509295  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:42.509352  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:42.536687  445201 cri.go:89] found id: ""
	I1026 09:20:42.536710  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.536720  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:42.536733  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:42.536743  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:42.567830  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:42.567857  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:42.657078  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:42.657117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:42.738369  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:42.738407  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:42.765459  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:42.765488  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:42.793003  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:42.793032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:42.874464  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:42.874500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:43.074563  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:43.074602  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:43.091834  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:43.091868  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:43.168463  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:43.168488  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:43.168529  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:45.716953  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:45.727823  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:45.727901  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:45.754838  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:45.754862  445201 cri.go:89] found id: ""
	I1026 09:20:45.754871  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:45.754935  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.758953  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:45.759048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:45.786578  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:45.786611  445201 cri.go:89] found id: ""
	I1026 09:20:45.786620  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:45.786677  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.790410  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:45.790484  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:45.817100  445201 cri.go:89] found id: ""
	I1026 09:20:45.817125  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.817134  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:45.817140  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:45.817195  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:45.844261  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:45.844284  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:45.844288  445201 cri.go:89] found id: ""
	I1026 09:20:45.844296  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:45.844352  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.848186  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.851653  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:45.851724  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:45.877419  445201 cri.go:89] found id: ""
	I1026 09:20:45.877444  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.877453  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:45.877459  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:45.877563  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:45.903685  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:45.903749  445201 cri.go:89] found id: ""
	I1026 09:20:45.903770  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:45.903841  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.907749  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:45.907835  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:45.935133  445201 cri.go:89] found id: ""
	I1026 09:20:45.935199  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.935220  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:45.935235  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:45.935313  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:45.963149  445201 cri.go:89] found id: ""
	I1026 09:20:45.963222  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.963244  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:45.963276  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:45.963312  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:46.154397  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:46.154434  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:46.173037  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:46.173076  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:46.248971  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:46.249042  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:46.249068  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:46.289852  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:46.289882  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:46.316390  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:46.316425  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:46.401774  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:46.401823  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:46.440602  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:46.440633  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:46.536038  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:46.536075  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:46.611345  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:46.611389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:49.149727  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:49.161338  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:49.161429  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:49.189053  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:49.189077  445201 cri.go:89] found id: ""
	I1026 09:20:49.189085  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:49.189156  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.193041  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:49.193123  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:49.222571  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:49.222596  445201 cri.go:89] found id: ""
	I1026 09:20:49.222605  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:49.222663  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.226532  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:49.226607  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:49.258470  445201 cri.go:89] found id: ""
	I1026 09:20:49.258495  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.258503  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:49.258509  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:49.258565  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:49.285929  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:49.285953  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:49.285958  445201 cri.go:89] found id: ""
	I1026 09:20:49.285966  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:49.286021  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.289772  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.293274  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:49.293397  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:49.320933  445201 cri.go:89] found id: ""
	I1026 09:20:49.320957  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.320966  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:49.320985  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:49.321048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:49.347746  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:49.347770  445201 cri.go:89] found id: ""
	I1026 09:20:49.347784  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:49.347843  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.351610  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:49.351682  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:49.384317  445201 cri.go:89] found id: ""
	I1026 09:20:49.384343  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.384352  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:49.384358  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:49.384417  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:49.410794  445201 cri.go:89] found id: ""
	I1026 09:20:49.410819  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.410828  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:49.410843  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:49.410855  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:49.484572  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:49.484598  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:49.484611  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:49.578246  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:49.578283  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:49.617058  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:49.617098  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:49.708347  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:49.708386  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:49.737503  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:49.737539  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:49.773301  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:49.773335  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:49.858035  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:49.858075  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:50.062201  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:50.062240  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:50.079457  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:50.079491  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:52.620387  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:52.631696  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:52.631768  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:52.662257  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:52.662280  445201 cri.go:89] found id: ""
	I1026 09:20:52.662288  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:52.662341  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.666304  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:52.666376  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:52.692937  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:52.693003  445201 cri.go:89] found id: ""
	I1026 09:20:52.693025  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:52.693111  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.696900  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:52.696968  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:52.722816  445201 cri.go:89] found id: ""
	I1026 09:20:52.722849  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.722859  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:52.722865  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:52.722919  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:52.749987  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:52.750011  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:52.750017  445201 cri.go:89] found id: ""
	I1026 09:20:52.750024  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:52.750078  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.753715  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.757282  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:52.757350  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:52.786064  445201 cri.go:89] found id: ""
	I1026 09:20:52.786143  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.786167  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:52.786192  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:52.786286  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:52.813518  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:52.813544  445201 cri.go:89] found id: ""
	I1026 09:20:52.813553  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:52.813610  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.817548  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:52.817623  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:52.851278  445201 cri.go:89] found id: ""
	I1026 09:20:52.851307  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.851315  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:52.851322  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:52.851382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:52.876624  445201 cri.go:89] found id: ""
	I1026 09:20:52.876699  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.876722  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:52.876743  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:52.876769  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:52.911672  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:52.911705  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:53.004812  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:53.004858  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:53.204975  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:53.205013  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:53.293863  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:53.293898  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:53.384132  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:53.384173  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:53.415787  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:53.415819  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:53.445484  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:53.445517  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:53.490785  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:53.490818  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:53.508703  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:53.508789  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:53.581038  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
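Each polling iteration above runs the same gathering pass before checking for the apiserver again. For reference, a condensed sketch of the equivalent manual triage on the node, built only from commands that appear verbatim in this log (the container ID and the v1.34.1 binary path are specific to this run and would need substituting on another cluster):

	# Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Enumerate control-plane containers known to CRI-O (all states)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Tail a specific container's logs (ID taken from crictl ps output)
	sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668
	# Node-level services and kernel messages
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Ask the apiserver directly (this is the step that fails in each iteration above)
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig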
	I1026 09:20:56.082162  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:56.093503  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:56.093607  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:56.121809  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:56.121828  445201 cri.go:89] found id: ""
	I1026 09:20:56.121836  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:56.121913  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.125763  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:56.125865  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:56.151639  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:56.151664  445201 cri.go:89] found id: ""
	I1026 09:20:56.151673  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:56.151798  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.156294  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:56.156429  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:56.186858  445201 cri.go:89] found id: ""
	I1026 09:20:56.186885  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.186894  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:56.186900  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:56.186980  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:56.212611  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:56.212635  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:56.212640  445201 cri.go:89] found id: ""
	I1026 09:20:56.212647  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:56.212705  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.216976  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.220594  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:56.220669  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:56.250514  445201 cri.go:89] found id: ""
	I1026 09:20:56.250590  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.250614  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:56.250636  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:56.250762  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:56.278227  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:56.278302  445201 cri.go:89] found id: ""
	I1026 09:20:56.278326  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:56.278413  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.282569  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:56.282705  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:56.309251  445201 cri.go:89] found id: ""
	I1026 09:20:56.309327  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.309350  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:56.309373  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:56.309465  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:56.336184  445201 cri.go:89] found id: ""
	I1026 09:20:56.336254  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.336275  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:56.336307  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:56.336344  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:56.421731  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:56.421769  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:56.452283  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:56.452357  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:56.657837  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:56.657882  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:56.675344  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:56.675374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:56.751910  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:56.751931  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:56.751943  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:56.839332  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:56.839375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:56.875572  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:56.875605  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:56.952770  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:56.952810  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:56.983110  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:56.983141  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
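The recurring "connection to the server localhost:8443 was refused" means kubectl reached the node but nothing was accepting connections on the apiserver port, even though `crictl ps -a` still lists a kube-apiserver container (it may be exited or crash-looping rather than running). A minimal way to confirm that from the node, assuming the standard iproute2 `ss` tool is available (it is not among the commands minikube itself runs here):

	# Is anything listening on the apiserver port?
	sudo ss -ltnp | grep :8443 || echo "nothing listening on :8443"
	# What state is the kube-apiserver container actually in?
	sudo crictl ps -a --name=kube-apiserver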
	I1026 09:20:59.513559  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:59.526258  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:59.526329  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:59.552836  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:59.552860  445201 cri.go:89] found id: ""
	I1026 09:20:59.552869  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:59.552925  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.556827  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:59.556906  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:59.591057  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:59.591077  445201 cri.go:89] found id: ""
	I1026 09:20:59.591085  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:59.591141  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.595063  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:59.595138  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:59.622673  445201 cri.go:89] found id: ""
	I1026 09:20:59.622701  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.622736  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:59.622745  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:59.622801  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:59.651306  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:59.651330  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:59.651336  445201 cri.go:89] found id: ""
	I1026 09:20:59.651355  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:59.651432  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.655121  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.658855  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:59.658960  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:59.685400  445201 cri.go:89] found id: ""
	I1026 09:20:59.685426  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.685436  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:59.685442  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:59.685498  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:59.712619  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:59.712643  445201 cri.go:89] found id: ""
	I1026 09:20:59.712651  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:59.712710  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.716925  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:59.717025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:59.752250  445201 cri.go:89] found id: ""
	I1026 09:20:59.752274  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.752283  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:59.752289  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:59.752358  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:59.778971  445201 cri.go:89] found id: ""
	I1026 09:20:59.778994  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.779004  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:59.779019  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:59.779033  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:59.972175  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:59.972210  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:59.988939  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:59.988971  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:00.269861  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:00.269899  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:00.328255  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:00.328296  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:00.451120  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:00.451169  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:00.488800  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:00.488955  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:00.592074  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:00.592097  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:00.592112  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:00.623110  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:00.623147  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:00.707786  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:00.707828  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:03.247930  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:03.261785  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:03.261879  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:03.291325  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:03.291349  445201 cri.go:89] found id: ""
	I1026 09:21:03.291358  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:03.291416  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.295568  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:03.295641  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:03.323403  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:03.323423  445201 cri.go:89] found id: ""
	I1026 09:21:03.323432  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:03.323489  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.327333  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:03.327406  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:03.354897  445201 cri.go:89] found id: ""
	I1026 09:21:03.354921  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.354935  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:03.354942  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:03.355003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:03.387726  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:03.387803  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:03.387823  445201 cri.go:89] found id: ""
	I1026 09:21:03.387847  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:03.387920  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.392298  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.396190  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:03.396308  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:03.424850  445201 cri.go:89] found id: ""
	I1026 09:21:03.424875  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.424884  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:03.424890  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:03.424969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:03.453335  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:03.453360  445201 cri.go:89] found id: ""
	I1026 09:21:03.453369  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:03.453472  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.457581  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:03.457675  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:03.486882  445201 cri.go:89] found id: ""
	I1026 09:21:03.486954  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.486977  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:03.486999  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:03.487104  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:03.529823  445201 cri.go:89] found id: ""
	I1026 09:21:03.529848  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.529858  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:03.529893  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:03.529922  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:03.740698  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:03.740737  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:03.763204  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:03.763234  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:03.837225  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:03.837243  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:03.837256  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:03.926755  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:03.926793  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:03.956255  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:03.956282  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:03.983359  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:03.983392  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:04.018225  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:04.018254  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:04.062731  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:04.062761  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:04.143784  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:04.143824  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:06.728492  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:06.740045  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:06.740162  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:06.767428  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:06.767450  445201 cri.go:89] found id: ""
	I1026 09:21:06.767458  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:06.767515  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.771160  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:06.771294  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:06.798465  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:06.798489  445201 cri.go:89] found id: ""
	I1026 09:21:06.798498  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:06.798574  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.803242  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:06.803327  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:06.829872  445201 cri.go:89] found id: ""
	I1026 09:21:06.829900  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.829909  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:06.829915  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:06.829978  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:06.862567  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:06.862598  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:06.862607  445201 cri.go:89] found id: ""
	I1026 09:21:06.862619  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:06.862686  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.866827  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.870490  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:06.870561  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:06.898277  445201 cri.go:89] found id: ""
	I1026 09:21:06.898306  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.898314  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:06.898321  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:06.898379  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:06.925628  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:06.925653  445201 cri.go:89] found id: ""
	I1026 09:21:06.925661  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:06.925717  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.929528  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:06.929597  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:06.960631  445201 cri.go:89] found id: ""
	I1026 09:21:06.960697  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.960719  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:06.960733  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:06.960809  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:06.990323  445201 cri.go:89] found id: ""
	I1026 09:21:06.990349  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.990358  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:06.990390  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:06.990410  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:07.184993  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:07.185029  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:07.201638  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:07.201678  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:07.290033  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:07.290071  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:07.331739  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:07.331775  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:07.410118  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:07.410155  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:07.444128  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:07.444158  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:07.530207  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:07.530274  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:07.530301  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:07.563028  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:07.563063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:07.591338  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:07.591364  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:10.178821  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:10.190222  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:10.190298  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:10.227875  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:10.227897  445201 cri.go:89] found id: ""
	I1026 09:21:10.227906  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:10.227964  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.231746  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:10.231821  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:10.260172  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:10.260192  445201 cri.go:89] found id: ""
	I1026 09:21:10.260200  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:10.260270  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.264202  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:10.264276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:10.291339  445201 cri.go:89] found id: ""
	I1026 09:21:10.291367  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.291377  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:10.291383  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:10.291441  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:10.318497  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:10.318520  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:10.318525  445201 cri.go:89] found id: ""
	I1026 09:21:10.318532  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:10.318590  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.322446  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.326370  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:10.326464  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:10.354158  445201 cri.go:89] found id: ""
	I1026 09:21:10.354181  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.354191  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:10.354197  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:10.354254  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:10.389289  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:10.389313  445201 cri.go:89] found id: ""
	I1026 09:21:10.389321  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:10.389373  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.393257  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:10.393338  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:10.420691  445201 cri.go:89] found id: ""
	I1026 09:21:10.420725  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.420733  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:10.420770  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:10.420851  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:10.447235  445201 cri.go:89] found id: ""
	I1026 09:21:10.447258  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.447267  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:10.447300  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:10.447316  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:10.463955  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:10.463983  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:10.547364  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:10.547386  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:10.547400  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:10.638440  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:10.638477  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:10.667342  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:10.667375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:10.880045  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:10.880083  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:10.917355  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:10.917389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:10.994430  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:10.994468  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:11.025132  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:11.025199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:11.106492  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:11.106530  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:13.646984  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:13.658294  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:13.658365  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:13.686138  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:13.686159  445201 cri.go:89] found id: ""
	I1026 09:21:13.686166  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:13.686221  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.689904  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:13.689975  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:13.717871  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:13.717900  445201 cri.go:89] found id: ""
	I1026 09:21:13.717909  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:13.717964  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.721862  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:13.721934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:13.748415  445201 cri.go:89] found id: ""
	I1026 09:21:13.748446  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.748454  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:13.748460  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:13.748521  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:13.782153  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:13.782172  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:13.782177  445201 cri.go:89] found id: ""
	I1026 09:21:13.782184  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:13.782242  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.786369  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.790770  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:13.790856  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:13.819605  445201 cri.go:89] found id: ""
	I1026 09:21:13.819633  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.819641  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:13.819647  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:13.819759  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:13.848568  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:13.848595  445201 cri.go:89] found id: ""
	I1026 09:21:13.848604  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:13.848722  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.852510  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:13.852595  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:13.883059  445201 cri.go:89] found id: ""
	I1026 09:21:13.883086  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.883096  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:13.883102  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:13.883159  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:13.909300  445201 cri.go:89] found id: ""
	I1026 09:21:13.909326  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.909341  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:13.909356  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:13.909372  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:14.101663  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:14.101702  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:14.148245  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:14.148275  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:14.236589  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:14.236626  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:14.268611  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:14.268642  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:14.298171  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:14.298199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:14.378289  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:14.378327  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:14.411068  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:14.411100  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:14.429706  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:14.429738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:14.511924  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:14.511945  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:14.511959  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:17.102246  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:17.114188  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:17.114253  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:17.142450  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:17.142471  445201 cri.go:89] found id: ""
	I1026 09:21:17.142479  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:17.142535  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.146640  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:17.146757  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:17.172790  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:17.172813  445201 cri.go:89] found id: ""
	I1026 09:21:17.172822  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:17.172879  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.176741  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:17.176813  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:17.204125  445201 cri.go:89] found id: ""
	I1026 09:21:17.204151  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.204160  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:17.204166  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:17.204227  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:17.230852  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:17.230875  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:17.230880  445201 cri.go:89] found id: ""
	I1026 09:21:17.230888  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:17.230943  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.234869  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.238552  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:17.238641  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:17.269934  445201 cri.go:89] found id: ""
	I1026 09:21:17.269960  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.269970  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:17.269977  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:17.270036  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:17.296616  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:17.296654  445201 cri.go:89] found id: ""
	I1026 09:21:17.296664  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:17.296735  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.300505  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:17.300579  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:17.327307  445201 cri.go:89] found id: ""
	I1026 09:21:17.327332  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.327340  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:17.327347  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:17.327409  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:17.363904  445201 cri.go:89] found id: ""
	I1026 09:21:17.363930  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.363939  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:17.363952  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:17.363972  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:17.406846  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:17.406876  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:17.609584  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:17.609620  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:17.698700  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:17.698741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:17.736430  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:17.736469  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:17.767180  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:17.767207  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:17.849128  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:17.849163  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:17.880031  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:17.880059  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:17.896355  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:17.896384  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:17.970628  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:17.970690  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:17.970778  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:20.550647  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:20.561701  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:20.561785  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:20.590001  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:20.590020  445201 cri.go:89] found id: ""
	I1026 09:21:20.590028  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:20.590082  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.593813  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:20.593882  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:20.626195  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:20.626219  445201 cri.go:89] found id: ""
	I1026 09:21:20.626228  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:20.626282  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.630016  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:20.630089  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:20.660378  445201 cri.go:89] found id: ""
	I1026 09:21:20.660414  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.660423  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:20.660430  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:20.660509  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:20.687394  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:20.687416  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:20.687421  445201 cri.go:89] found id: ""
	I1026 09:21:20.687429  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:20.687484  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.691121  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.694770  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:20.694839  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:20.721137  445201 cri.go:89] found id: ""
	I1026 09:21:20.721163  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.721172  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:20.721179  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:20.721240  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:20.747340  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:20.747360  445201 cri.go:89] found id: ""
	I1026 09:21:20.747368  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:20.747430  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.751174  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:20.751287  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:20.777016  445201 cri.go:89] found id: ""
	I1026 09:21:20.777043  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.777052  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:20.777059  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:20.777137  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:20.804106  445201 cri.go:89] found id: ""
	I1026 09:21:20.804130  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.804145  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:20.804159  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:20.804170  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:20.829056  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:20.829083  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:20.859258  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:20.859285  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:21.057262  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:21.057314  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:21.098469  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:21.098501  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:21.178683  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:21.178821  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:21.267215  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:21.267252  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:21.284825  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:21.284855  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:21.355190  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:21.355210  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:21.355222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:21.449675  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:21.449715  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:23.979650  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:23.990886  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:23.990994  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:24.022185  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:24.022205  445201 cri.go:89] found id: ""
	I1026 09:21:24.022212  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:24.022295  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.026431  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:24.026556  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:24.056183  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:24.056206  445201 cri.go:89] found id: ""
	I1026 09:21:24.056215  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:24.056290  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.060036  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:24.060114  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:24.089520  445201 cri.go:89] found id: ""
	I1026 09:21:24.089548  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.089558  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:24.089564  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:24.089622  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:24.119130  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:24.119151  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:24.119156  445201 cri.go:89] found id: ""
	I1026 09:21:24.119164  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:24.119220  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.122899  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.126615  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:24.126690  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:24.154462  445201 cri.go:89] found id: ""
	I1026 09:21:24.154485  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.154502  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:24.154510  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:24.154569  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:24.182057  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:24.182123  445201 cri.go:89] found id: ""
	I1026 09:21:24.182145  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:24.182238  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.186279  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:24.186401  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:24.213464  445201 cri.go:89] found id: ""
	I1026 09:21:24.213530  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.213555  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:24.213577  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:24.213659  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:24.243102  445201 cri.go:89] found id: ""
	I1026 09:21:24.243129  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.243138  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:24.243153  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:24.243164  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:24.443606  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:24.443648  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:24.462087  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:24.462117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:24.538553  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:24.538574  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:24.538588  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:24.638803  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:24.638843  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:24.685028  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:24.685062  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:24.782516  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:24.782556  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:24.812833  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:24.812902  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:24.844661  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:24.844689  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:24.874272  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:24.874302  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:27.459319  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:27.471232  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:27.471319  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:27.505082  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:27.505104  445201 cri.go:89] found id: ""
	I1026 09:21:27.505112  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:27.505205  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.511716  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:27.511848  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:27.539424  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:27.539495  445201 cri.go:89] found id: ""
	I1026 09:21:27.539518  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:27.539604  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.543429  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:27.543501  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:27.568844  445201 cri.go:89] found id: ""
	I1026 09:21:27.568869  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.568878  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:27.568884  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:27.568940  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:27.594428  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:27.594448  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:27.594453  445201 cri.go:89] found id: ""
	I1026 09:21:27.594461  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:27.594513  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.598153  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.601518  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:27.601583  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:27.630965  445201 cri.go:89] found id: ""
	I1026 09:21:27.630992  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.631001  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:27.631014  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:27.631070  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:27.657171  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:27.657192  445201 cri.go:89] found id: ""
	I1026 09:21:27.657201  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:27.657259  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.660934  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:27.661026  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:27.688105  445201 cri.go:89] found id: ""
	I1026 09:21:27.688130  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.688139  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:27.688145  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:27.688202  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:27.714913  445201 cri.go:89] found id: ""
	I1026 09:21:27.714940  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.714948  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:27.714963  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:27.714977  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:27.742801  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:27.742830  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:27.773812  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:27.773840  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:27.855060  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:27.855097  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:27.890187  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:27.890217  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:28.098021  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:28.098058  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:28.186518  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:28.186596  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:28.264677  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:28.264714  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:28.282938  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:28.282969  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:28.348772  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:28.348792  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:28.348805  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:30.895244  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:30.910800  445201 out.go:203] 
	W1026 09:21:30.913672  445201 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1026 09:21:30.913714  445201 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1026 09:21:30.913725  445201 out.go:285] * Related issues:
	* Related issues:
	W1026 09:21:30.913740  445201 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1026 09:21:30.913752  445201 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1026 09:21:30.916625  445201 out.go:203] 

** /stderr **
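The probes captured in the stderr above boil down to a handful of shell commands run inside the node: pgrep for the apiserver process, crictl to enumerate and tail containers, journalctl for kubelet and CRI-O, and the bundled kubectl for describe nodes. A minimal sketch for re-running the key checks by hand, assuming the kubernetes-upgrade-275732 node container is still up and reusing the apiserver container ID reported in the log:

	# Does an apiserver process exist? (the probe that never succeeded above)
	out/minikube-linux-arm64 -p kubernetes-upgrade-275732 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List the apiserver container and tail its logs, as the log-gathering loop does
	out/minikube-linux-arm64 -p kubernetes-upgrade-275732 ssh -- sudo crictl ps -a --name=kube-apiserver
	out/minikube-linux-arm64 -p kubernetes-upgrade-275732 ssh -- sudo crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668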
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-arm64 start -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 105
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-10-26 09:21:31.729637706 +0000 UTC m=+4103.123124772
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-275732
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-275732:

-- stdout --
	[
	    {
	        "Id": "318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841",
	        "Created": "2025-10-26T09:12:31.598686881Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:13:11.484770434Z",
	            "FinishedAt": "2025-10-26T09:13:10.243058396Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841/hostname",
	        "HostsPath": "/var/lib/docker/containers/318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841/hosts",
	        "LogPath": "/var/lib/docker/containers/318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841/318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841-json.log",
	        "Name": "/kubernetes-upgrade-275732",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-275732:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-275732",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "318d12f1e3c3bf7535c09b93810f47318e8bebc783c4d0306658d0e2eb4b6841",
	                "LowerDir": "/var/lib/docker/overlay2/88d6e876d34e8a6734d413a4e803bcf372d6198ef1031004f9b08da187bdcf4a-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88d6e876d34e8a6734d413a4e803bcf372d6198ef1031004f9b08da187bdcf4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88d6e876d34e8a6734d413a4e803bcf372d6198ef1031004f9b08da187bdcf4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88d6e876d34e8a6734d413a4e803bcf372d6198ef1031004f9b08da187bdcf4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-275732",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-275732/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-275732",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-275732",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-275732",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25d21a2596b980b93823d2331e8c76eda681afd24197071d0ef42470c428764e",
	            "SandboxKey": "/var/run/docker/netns/25d21a2596b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33378"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-275732": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:e1:36:a2:3f:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac858c856d7e0149ed9d40c5b70d1998a96dbb9018b09e8fd327443ab39239d8",
	                    "EndpointID": "0db8c03e06d1ac6e3618b3eb5b4bafd3759a0e9b8e23d80edb9d50d95d1f16fe",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-275732",
	                        "318d12f1e3c3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
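The NetworkSettings block above confirms that 8443/tcp is published to host port 33378, so the earlier connection refusals on localhost:8443 reflect the absent apiserver rather than a missing port mapping. A hedged one-liner to extract that mapping from the same inspect data, using docker's Go-template syntax:

	# Prints 33378 for the inspect output shown above
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' kubernetes-upgrade-275732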
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-275732 -n kubernetes-upgrade-275732
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-275732 -n kubernetes-upgrade-275732: exit status 2 (373.72366ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
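The --format={{.Host}} argument above is a Go template over minikube's status output, which is why only "Running" is printed despite the non-zero exit. A sketch that queries the remaining status fields in one call (field names assumed to match minikube's documented status template):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-275732 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'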
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-275732 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-275732 logs -n 25: (1.211718865s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-796399 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status docker --all --full --no-pager                                      │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat docker --no-pager                                                      │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /etc/docker/daemon.json                                                          │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo docker system info                                                                   │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cri-dockerd --version                                                                │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat containerd --no-pager                                                  │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /etc/containerd/config.toml                                                      │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo containerd config dump                                                               │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status crio --all --full --no-pager                                        │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat crio --no-pager                                                        │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo crio config                                                                          │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ delete  │ -p cilium-796399                                                                                           │ cilium-796399            │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:17 UTC │
	│ start   │ -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-003748 │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:18 UTC │
	│ delete  │ -p force-systemd-env-003748                                                                                │ force-systemd-env-003748 │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:18 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-375355   │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:18:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:18:29.167181  468013 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:18:29.167301  468013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:18:29.167305  468013 out.go:374] Setting ErrFile to fd 2...
	I1026 09:18:29.167308  468013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:18:29.167584  468013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:18:29.167982  468013 out.go:368] Setting JSON to false
	I1026 09:18:29.168903  468013 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10860,"bootTime":1761459450,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:18:29.168962  468013 start.go:141] virtualization:  
	I1026 09:18:29.172737  468013 out.go:179] * [cert-expiration-375355] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:18:29.177433  468013 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:18:29.177488  468013 notify.go:220] Checking for updates...
	I1026 09:18:29.184791  468013 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:18:29.188076  468013 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:18:29.191289  468013 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:18:29.194434  468013 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:18:29.197488  468013 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:18:29.201210  468013 config.go:182] Loaded profile config "kubernetes-upgrade-275732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:18:29.201301  468013 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:18:29.228314  468013 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:18:29.228414  468013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:18:29.305153  468013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:18:29.292173039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:18:29.305251  468013 docker.go:318] overlay module found
	I1026 09:18:29.308558  468013 out.go:179] * Using the docker driver based on user configuration
	I1026 09:18:29.311538  468013 start.go:305] selected driver: docker
	I1026 09:18:29.311548  468013 start.go:925] validating driver "docker" against <nil>
	I1026 09:18:29.311560  468013 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:18:29.312379  468013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:18:29.414429  468013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:18:29.404751199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:18:29.414568  468013 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 09:18:29.414822  468013 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 09:18:29.417782  468013 out.go:179] * Using Docker driver with root privileges
	I1026 09:18:29.420613  468013 cni.go:84] Creating CNI manager for ""
	I1026 09:18:29.420677  468013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:18:29.420684  468013 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:18:29.420762  468013 start.go:349] cluster config:
	{Name:cert-expiration-375355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-375355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:18:29.423850  468013 out.go:179] * Starting "cert-expiration-375355" primary control-plane node in "cert-expiration-375355" cluster
	I1026 09:18:29.426670  468013 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:18:29.429748  468013 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:18:29.432512  468013 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:18:29.432559  468013 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:18:29.432567  468013 cache.go:58] Caching tarball of preloaded images
	I1026 09:18:29.432649  468013 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:18:29.432659  468013 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:18:29.432804  468013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/config.json ...
	I1026 09:18:29.432820  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/config.json: {Name:mk99702d599da28f52093d6a5d46c4e082baf4b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:29.432972  468013 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:18:29.454906  468013 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:18:29.454917  468013 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:18:29.454929  468013 cache.go:232] Successfully downloaded all kic artifacts
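The three lines above are the kic base-image cache check: if the pinned digest is already present in the local daemon, both pull and load are skipped. By hand, the check amounts to roughly:

    # illustrative equivalent of the daemon check; image ref copied from the log
    img='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8'
    docker image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1 \
      && echo 'found in local daemon, skipping pull' \
      || docker pull "$img"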
	I1026 09:18:29.454950  468013 start.go:360] acquireMachinesLock for cert-expiration-375355: {Name:mkce12949a1a4849d22049f20630a884be46d3b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:18:29.455047  468013 start.go:364] duration metric: took 82.864µs to acquireMachinesLock for "cert-expiration-375355"
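The machines lock above is acquired with Delay:500ms and Timeout:10m0s, i.e. retry every half second until a deadline. A minimal shell sketch of that retry-until-timeout pattern (not minikube's actual lock implementation) looks like:

    # sketch only: atomic mkdir as the lock; 500ms retry and 10m deadline taken from the log
    lock=/tmp/minikube-machines.lock
    deadline=$((SECONDS + 600))
    until mkdir "$lock" 2>/dev/null; do
      (( SECONDS >= deadline )) && { echo 'lock timeout' >&2; exit 1; }
      sleep 0.5
    done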
	I1026 09:18:29.455069  468013 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-375355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-375355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:18:29.455134  468013 start.go:125] createHost starting for "" (driver="docker")
	I1026 09:18:27.420858  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:27.435020  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:27.435093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:27.462527  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:27.462551  445201 cri.go:89] found id: ""
	I1026 09:18:27.462559  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:27.462614  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.466287  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:27.466360  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:27.493839  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:27.493861  445201 cri.go:89] found id: ""
	I1026 09:18:27.493870  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:27.493927  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.497623  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:27.497752  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:27.526904  445201 cri.go:89] found id: ""
	I1026 09:18:27.526927  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.526936  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:27.526942  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:27.527003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:27.557509  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:27.557530  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:27.557535  445201 cri.go:89] found id: ""
	I1026 09:18:27.557542  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:27.557608  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.561397  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.565255  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:27.565338  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:27.594176  445201 cri.go:89] found id: ""
	I1026 09:18:27.594202  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.594211  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:27.594217  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:27.594279  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:27.623222  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:27.623248  445201 cri.go:89] found id: ""
	I1026 09:18:27.623258  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:27.623316  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:27.627247  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:27.627330  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:27.653051  445201 cri.go:89] found id: ""
	I1026 09:18:27.653078  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.653086  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:27.653092  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:27.653150  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:27.679559  445201 cri.go:89] found id: ""
	I1026 09:18:27.679586  445201 logs.go:282] 0 containers: []
	W1026 09:18:27.679597  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:27.679610  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:27.679622  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:27.767257  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:27.767293  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:27.801763  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:27.801796  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:27.828368  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:27.828394  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:27.862093  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:27.862127  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:28.046689  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:28.046734  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:28.144179  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:28.144202  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:28.144215  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:28.210208  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:28.210291  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:28.241056  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:28.241082  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:28.333484  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:28.333563  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
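That completes one full log-gathering round for process 445201. The same sweep can be reproduced by hand inside the node; every command below is copied from the Run: lines above:

    # run inside the node, e.g. via 'minikube ssh'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$c"); do
        echo "== $c $id =="; sudo crictl logs --tail 400 "$id"
      done
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400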
	I1026 09:18:29.458454  468013 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:18:29.458666  468013 start.go:159] libmachine.API.Create for "cert-expiration-375355" (driver="docker")
	I1026 09:18:29.458696  468013 client.go:168] LocalClient.Create starting
	I1026 09:18:29.458813  468013 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:18:29.458853  468013 main.go:141] libmachine: Decoding PEM data...
	I1026 09:18:29.458864  468013 main.go:141] libmachine: Parsing certificate...
	I1026 09:18:29.458916  468013 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:18:29.458931  468013 main.go:141] libmachine: Decoding PEM data...
	I1026 09:18:29.458939  468013 main.go:141] libmachine: Parsing certificate...
	I1026 09:18:29.459299  468013 cli_runner.go:164] Run: docker network inspect cert-expiration-375355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:18:29.480339  468013 cli_runner.go:211] docker network inspect cert-expiration-375355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:18:29.480421  468013 network_create.go:284] running [docker network inspect cert-expiration-375355] to gather additional debugging logs...
	I1026 09:18:29.480436  468013 cli_runner.go:164] Run: docker network inspect cert-expiration-375355
	W1026 09:18:29.496007  468013 cli_runner.go:211] docker network inspect cert-expiration-375355 returned with exit code 1
	I1026 09:18:29.496028  468013 network_create.go:287] error running [docker network inspect cert-expiration-375355]: docker network inspect cert-expiration-375355: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-375355 not found
	I1026 09:18:29.496040  468013 network_create.go:289] output of [docker network inspect cert-expiration-375355]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-375355 not found
	
	** /stderr **
	I1026 09:18:29.496145  468013 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:18:29.512718  468013 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:18:29.513059  468013 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:18:29.513290  468013 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:18:29.513524  468013 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ac858c856d7e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8a:a4:e0:2c:91:ef} reservation:<nil>}
	I1026 09:18:29.513928  468013 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a60a20}
	I1026 09:18:29.513943  468013 network_create.go:124] attempt to create docker network cert-expiration-375355 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 09:18:29.513999  468013 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-375355 cert-expiration-375355
	I1026 09:18:29.581952  468013 network_create.go:108] docker network cert-expiration-375355 192.168.85.0/24 created
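The subnet probing above walks 192.168.49.0/24 upward in steps of 9 until an unclaimed /24 is found, then creates the bridge network on it. An illustrative shell equivalent (not minikube's code; create-network flags trimmed from the log) is:

    # probe the same candidate /24s the log shows (49, 58, 67, 76, 85, ...)
    for third in 49 58 67 76 85 94; do
      subnet="192.168.${third}.0/24"
      docker network ls -q | xargs -r docker network inspect \
          --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' | grep -qx "$subnet" && continue
      docker network create --driver=bridge --subnet="$subnet" --gateway="192.168.${third}.1" \
        -o com.docker.network.driver.mtu=1500 cert-expiration-375355
      break
    done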
	I1026 09:18:29.581974  468013 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-375355" container
	I1026 09:18:29.582060  468013 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:18:29.598344  468013 cli_runner.go:164] Run: docker volume create cert-expiration-375355 --label name.minikube.sigs.k8s.io=cert-expiration-375355 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:18:29.616381  468013 oci.go:103] Successfully created a docker volume cert-expiration-375355
	I1026 09:18:29.616464  468013 cli_runner.go:164] Run: docker run --rm --name cert-expiration-375355-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-375355 --entrypoint /usr/bin/test -v cert-expiration-375355:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:18:30.148728  468013 oci.go:107] Successfully prepared a docker volume cert-expiration-375355
	I1026 09:18:30.148771  468013 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:18:30.148791  468013 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:18:30.148868  468013 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-375355:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 09:18:30.853449  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:30.864358  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:30.864425  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:30.912189  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:30.912213  445201 cri.go:89] found id: ""
	I1026 09:18:30.912221  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:30.912274  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:30.916421  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:30.916490  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:30.949024  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:30.949044  445201 cri.go:89] found id: ""
	I1026 09:18:30.949053  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:30.949106  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:30.953463  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:30.953531  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:30.980685  445201 cri.go:89] found id: ""
	I1026 09:18:30.980709  445201 logs.go:282] 0 containers: []
	W1026 09:18:30.980718  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:30.980724  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:30.980780  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:31.016133  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:31.016158  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:31.016173  445201 cri.go:89] found id: ""
	I1026 09:18:31.016181  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:31.016238  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:31.020939  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:31.025309  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:31.025386  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:31.060958  445201 cri.go:89] found id: ""
	I1026 09:18:31.060984  445201 logs.go:282] 0 containers: []
	W1026 09:18:31.060993  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:31.060998  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:31.061055  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:31.101477  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:31.101502  445201 cri.go:89] found id: ""
	I1026 09:18:31.101510  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:31.101569  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:31.106479  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:31.106551  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:31.149271  445201 cri.go:89] found id: ""
	I1026 09:18:31.149299  445201 logs.go:282] 0 containers: []
	W1026 09:18:31.149308  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:31.149314  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:31.149377  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:31.197370  445201 cri.go:89] found id: ""
	I1026 09:18:31.197396  445201 logs.go:282] 0 containers: []
	W1026 09:18:31.197404  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:31.197417  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:31.197429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:31.267814  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:31.267847  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:31.306800  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:31.306828  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:31.355411  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:31.355446  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:31.594236  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:31.594284  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:31.707540  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:31.707562  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:31.707575  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:31.749062  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:31.749139  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:31.782579  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:31.782608  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:31.875835  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:31.875914  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:31.893388  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:31.893468  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:34.494834  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:34.505863  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:34.505934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:34.533409  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:34.533434  445201 cri.go:89] found id: ""
	I1026 09:18:34.533444  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:34.533506  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.537148  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:34.537273  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:34.563259  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:34.563329  445201 cri.go:89] found id: ""
	I1026 09:18:34.563362  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:34.563444  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.567511  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:34.567635  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:34.598008  445201 cri.go:89] found id: ""
	I1026 09:18:34.598033  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.598043  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:34.598049  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:34.598106  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:34.625931  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:34.625955  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:34.625960  445201 cri.go:89] found id: ""
	I1026 09:18:34.625967  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:34.626023  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.629701  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.633096  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:34.633218  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:34.659117  445201 cri.go:89] found id: ""
	I1026 09:18:34.659144  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.659153  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:34.659160  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:34.659249  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:34.688692  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:34.688717  445201 cri.go:89] found id: ""
	I1026 09:18:34.688725  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:34.688779  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:34.692931  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:34.693020  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:34.721387  445201 cri.go:89] found id: ""
	I1026 09:18:34.721410  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.721418  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:34.721430  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:34.721486  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:34.767730  445201 cri.go:89] found id: ""
	I1026 09:18:34.767751  445201 logs.go:282] 0 containers: []
	W1026 09:18:34.767760  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:34.767774  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:34.767787  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:34.806565  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:34.806596  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:35.067378  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:35.067476  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:35.094465  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:35.094551  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:35.215638  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:35.215715  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:35.316393  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:35.316482  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:35.371828  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:35.371854  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:34.730572  468013 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-375355:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.581666883s)
	I1026 09:18:34.730596  468013 kic.go:203] duration metric: took 4.581802072s to extract preloaded images to volume ...
	W1026 09:18:34.730745  468013 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:18:34.730851  468013 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:18:34.847203  468013 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-375355 --name cert-expiration-375355 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-375355 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-375355 --network cert-expiration-375355 --ip 192.168.85.2 --volume cert-expiration-375355:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:18:35.239016  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Running}}
	I1026 09:18:35.265110  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Status}}
	I1026 09:18:35.314229  468013 cli_runner.go:164] Run: docker exec cert-expiration-375355 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:18:35.392070  468013 oci.go:144] the created container "cert-expiration-375355" has a running status.
	I1026 09:18:35.392100  468013 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa...
	I1026 09:18:35.791896  468013 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:18:35.818902  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Status}}
	I1026 09:18:35.843744  468013 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:18:35.843756  468013 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-375355 chown docker:docker /home/docker/.ssh/authorized_keys]
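SSH provisioning above is: generate a keypair on the host, copy the public key into the container as authorized_keys, then fix its ownership. An illustrative by-hand equivalent (paths shortened; assumes /home/docker/.ssh already exists in the kicbase image):

    # illustrative equivalent of the kic ssh-key provisioning
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker cp ./id_rsa.pub cert-expiration-375355:/home/docker/.ssh/authorized_keys
    docker exec --privileged cert-expiration-375355 chown docker:docker /home/docker/.ssh/authorized_keys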
	I1026 09:18:35.917101  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Status}}
	I1026 09:18:35.940142  468013 machine.go:93] provisionDockerMachine start ...
	I1026 09:18:35.940258  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:35.989947  468013 main.go:141] libmachine: Using SSH client type: native
	I1026 09:18:35.990271  468013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1026 09:18:35.990278  468013 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:18:35.991070  468013 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 09:18:39.155002  468013 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-375355
	
	I1026 09:18:39.155017  468013 ubuntu.go:182] provisioning hostname "cert-expiration-375355"
	I1026 09:18:39.155089  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
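The inspect call above resolves which host port Docker published for the container's 22/tcp; minikube then dials SSH at 127.0.0.1 on that port (33410 here). The equivalent by hand, with the key path abbreviated:

    # look up the published SSH port, then run a command over it
    port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' cert-expiration-375355)
    ssh -o StrictHostKeyChecking=no -i ~/.minikube/machines/cert-expiration-375355/id_rsa \
      -p "$port" docker@127.0.0.1 hostname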
	I1026 09:18:35.482470  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:35.482497  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:35.668838  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:35.668926  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:35.890598  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:35.890623  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:35.890636  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:38.480272  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:38.490814  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:38.490909  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:38.519259  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:38.519283  445201 cri.go:89] found id: ""
	I1026 09:18:38.519292  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:38.519346  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.530260  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:38.530335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:38.557988  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:38.558011  445201 cri.go:89] found id: ""
	I1026 09:18:38.558019  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:38.558072  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.561801  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:38.561902  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:38.588725  445201 cri.go:89] found id: ""
	I1026 09:18:38.588754  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.588763  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:38.588769  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:38.588828  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:38.615132  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:38.615155  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:38.615161  445201 cri.go:89] found id: ""
	I1026 09:18:38.615169  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:38.615233  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.618898  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.622588  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:38.622678  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:38.647919  445201 cri.go:89] found id: ""
	I1026 09:18:38.647945  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.647954  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:38.647960  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:38.648043  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:38.683904  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:38.683927  445201 cri.go:89] found id: ""
	I1026 09:18:38.683935  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:38.683991  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:38.687728  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:38.687814  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:38.713813  445201 cri.go:89] found id: ""
	I1026 09:18:38.713891  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.713914  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:38.713936  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:38.714028  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:38.744811  445201 cri.go:89] found id: ""
	I1026 09:18:38.744835  445201 logs.go:282] 0 containers: []
	W1026 09:18:38.744844  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:38.744859  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:38.744890  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:38.831387  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:38.831429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:38.858909  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:38.858939  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:38.889239  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:38.889270  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:39.072659  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:39.072738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:39.090892  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:39.091036  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:39.168978  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:39.169000  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:39.169013  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:39.213394  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:39.213429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:39.303324  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:39.303405  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:39.334190  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:39.334221  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:39.187448  468013 main.go:141] libmachine: Using SSH client type: native
	I1026 09:18:39.187747  468013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1026 09:18:39.187756  468013 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-375355 && echo "cert-expiration-375355" | sudo tee /etc/hostname
	I1026 09:18:39.373765  468013 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-375355
	
	I1026 09:18:39.373837  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:39.410433  468013 main.go:141] libmachine: Using SSH client type: native
	I1026 09:18:39.410769  468013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1026 09:18:39.410784  468013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-375355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-375355/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-375355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:18:39.562903  468013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:18:39.562921  468013 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:18:39.562950  468013 ubuntu.go:190] setting up certificates
	I1026 09:18:39.562959  468013 provision.go:84] configureAuth start
	I1026 09:18:39.563026  468013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-375355
	I1026 09:18:39.580306  468013 provision.go:143] copyHostCerts
	I1026 09:18:39.580360  468013 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:18:39.580367  468013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:18:39.580462  468013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:18:39.580545  468013 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:18:39.580549  468013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:18:39.580573  468013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:18:39.580624  468013 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:18:39.580627  468013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:18:39.580648  468013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:18:39.580695  468013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-375355 san=[127.0.0.1 192.168.85.2 cert-expiration-375355 localhost minikube]
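minikube generates this server certificate in Go, signed by the profile CA and carrying the SANs listed above. An openssl sketch of the same shape (file names assumed; openssl's -days granularity cannot express the test's 3m expiry):

    # illustrative CA-signed server cert with the SANs from the log
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj '/O=jenkins.cert-expiration-375355' -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:cert-expiration-375355,DNS:localhost,DNS:minikube') \
      -out server.pem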
	I1026 09:18:40.376079  468013 provision.go:177] copyRemoteCerts
	I1026 09:18:40.376139  468013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:18:40.376180  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:40.401819  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:18:40.506495  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:18:40.523750  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 09:18:40.541102  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:18:40.558896  468013 provision.go:87] duration metric: took 995.915907ms to configureAuth
	I1026 09:18:40.558912  468013 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:18:40.559095  468013 config.go:182] Loaded profile config "cert-expiration-375355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:18:40.559204  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:40.576387  468013 main.go:141] libmachine: Using SSH client type: native
	I1026 09:18:40.576691  468013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1026 09:18:40.576703  468013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:18:40.826396  468013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:18:40.826409  468013 machine.go:96] duration metric: took 4.886255557s to provisionDockerMachine
	I1026 09:18:40.826418  468013 client.go:171] duration metric: took 11.367717081s to LocalClient.Create
	I1026 09:18:40.826435  468013 start.go:167] duration metric: took 11.367774132s to libmachine.API.Create "cert-expiration-375355"
	I1026 09:18:40.826452  468013 start.go:293] postStartSetup for "cert-expiration-375355" (driver="docker")
	I1026 09:18:40.826462  468013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:18:40.826543  468013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:18:40.826594  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:40.844458  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:18:40.950652  468013 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:18:40.953757  468013 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:18:40.953783  468013 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:18:40.953792  468013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:18:40.953847  468013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:18:40.953925  468013 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:18:40.954031  468013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:18:40.961133  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:18:40.978242  468013 start.go:296] duration metric: took 151.77505ms for postStartSetup
	I1026 09:18:40.978589  468013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-375355
	I1026 09:18:40.994948  468013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/config.json ...
	I1026 09:18:40.995213  468013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:18:40.995252  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:41.012747  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:18:41.112000  468013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:18:41.116953  468013 start.go:128] duration metric: took 11.661805109s to createHost
	I1026 09:18:41.116968  468013 start.go:83] releasing machines lock for "cert-expiration-375355", held for 11.661914903s
	I1026 09:18:41.117039  468013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-375355
	I1026 09:18:41.134490  468013 ssh_runner.go:195] Run: cat /version.json
	I1026 09:18:41.134535  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:41.134875  468013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:18:41.134926  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:18:41.160728  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:18:41.162050  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:18:41.351019  468013 ssh_runner.go:195] Run: systemctl --version
	I1026 09:18:41.357294  468013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:18:41.400813  468013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:18:41.405166  468013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:18:41.405227  468013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:18:41.433505  468013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
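The loopback config is left alone, while bridge and podman CNI configs are shelved by renaming them with a .mk_disabled suffix so kindnet can own pod networking later. A rough Go equivalent of that find/mv pass (directory and name patterns from the log):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same selection as the find(1) invocation: bridge or podman configs only.
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join("/etc/cni/net.d", name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", src)
	}
}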
	I1026 09:18:41.433519  468013 start.go:495] detecting cgroup driver to use...
	I1026 09:18:41.433563  468013 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:18:41.433622  468013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:18:41.453386  468013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:18:41.465866  468013 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:18:41.465929  468013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:18:41.483804  468013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:18:41.501880  468013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:18:41.618153  468013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:18:41.740868  468013 docker.go:234] disabling docker service ...
	I1026 09:18:41.740931  468013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:18:41.761668  468013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:18:41.774350  468013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:18:41.897967  468013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:18:42.044405  468013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:18:42.061033  468013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:18:42.080593  468013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:18:42.080661  468013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.094681  468013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:18:42.094848  468013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.110289  468013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.136165  468013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.150620  468013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:18:42.161715  468013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.173581  468013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.197313  468013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:18:42.209229  468013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:18:42.224119  468013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:18:42.235072  468013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:18:42.389468  468013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:18:42.570836  468013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:18:42.570904  468013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:18:42.574913  468013 start.go:563] Will wait 60s for crictl version
	I1026 09:18:42.574967  468013 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.578477  468013 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:18:42.621530  468013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:18:42.621609  468013 ssh_runner.go:195] Run: crio --version
	I1026 09:18:42.669014  468013 ssh_runner.go:195] Run: crio --version
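Between the crictl.yaml write and the crio restart above, minikube patches /etc/crio/crio.conf.d/02-crio.conf in place with sed: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager to cgroupfs, and conmon_cgroup to pod. A sketch of the same rewrite in Go, with regexes mirroring the logged sed expressions (file path from the log):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror: sed -i '/conmon_cgroup = .*/d' (drop any existing conmon_cgroup line)
	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAll(data, nil)
	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Mirror: set cgroup_manager, then append conmon_cgroup = "pod" right after it
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}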
	I1026 09:18:42.711847  468013 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:18:42.714766  468013 cli_runner.go:164] Run: docker network inspect cert-expiration-375355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:18:42.740077  468013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:18:42.743972  468013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:18:42.752909  468013 kubeadm.go:883] updating cluster {Name:cert-expiration-375355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-375355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:18:42.753021  468013 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:18:42.753072  468013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:18:42.793761  468013 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:18:42.793772  468013 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:18:42.793827  468013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:18:42.831390  468013 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:18:42.831403  468013 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:18:42.831409  468013 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:18:42.831491  468013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-375355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-375355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:18:42.831573  468013 ssh_runner.go:195] Run: crio config
	I1026 09:18:42.920298  468013 cni.go:84] Creating CNI manager for ""
	I1026 09:18:42.920318  468013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:18:42.920333  468013 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:18:42.920355  468013 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-375355 NodeName:cert-expiration-375355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:18:42.920569  468013 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-375355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
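The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick sanity check that decodes each document and prints its kind; gopkg.in/yaml.v3 is an assumption here, not the library minikube itself uses:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
	}
}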
	I1026 09:18:42.920646  468013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:18:42.928959  468013 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:18:42.929018  468013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:18:42.936884  468013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 09:18:42.950149  468013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:18:42.962377  468013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1026 09:18:42.975520  468013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:18:42.979153  468013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:18:42.988193  468013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:18:43.127467  468013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:18:43.142687  468013 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355 for IP: 192.168.85.2
	I1026 09:18:43.142698  468013 certs.go:195] generating shared ca certs ...
	I1026 09:18:43.142760  468013 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:43.142945  468013 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:18:43.142995  468013 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:18:43.143002  468013 certs.go:257] generating profile certs ...
	I1026 09:18:43.143070  468013 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/client.key
	I1026 09:18:43.143081  468013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/client.crt with IP's: []
	I1026 09:18:43.604815  468013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/client.crt ...
	I1026 09:18:43.604830  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/client.crt: {Name:mkbbdcdc48b76682d1a2a7570a6b57b1bf6fdff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:43.605036  468013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/client.key ...
	I1026 09:18:43.605044  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/client.key: {Name:mk84a46e1e32483c12858d09c083efd136b10096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:43.605142  468013 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.key.65736a78
	I1026 09:18:43.605155  468013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.crt.65736a78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
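The apiserver serving cert is issued with IP SANs for the service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.85.2); since this profile sets CertExpiration:3m0s, the test deliberately provisions short-lived certificates. A self-contained sketch of minting such a cert with Go's crypto/x509 (self-signed here for brevity; minikube actually signs with minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * time.Minute), // matches CertExpiration:3m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),    // kubernetes service ClusterIP
			net.ParseIP("127.0.0.1"),    // loopback
			net.ParseIP("10.0.0.1"),     // alternate service CIDR address
			net.ParseIP("192.168.85.2"), // node IP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}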
	I1026 09:18:41.936736  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:41.952312  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:41.952378  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:42.003484  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:42.003509  445201 cri.go:89] found id: ""
	I1026 09:18:42.003519  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:42.003593  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.009347  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:42.009506  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:42.041427  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:42.041447  445201 cri.go:89] found id: ""
	I1026 09:18:42.041455  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:42.041511  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.050442  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:42.050511  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:42.085278  445201 cri.go:89] found id: ""
	I1026 09:18:42.085302  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.085313  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:42.085321  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:42.085390  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:42.135000  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:42.135039  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:42.135052  445201 cri.go:89] found id: ""
	I1026 09:18:42.135061  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:42.135142  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.143324  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.148842  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:42.148968  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:42.187227  445201 cri.go:89] found id: ""
	I1026 09:18:42.187260  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.187270  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:42.187277  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:42.187349  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:42.230590  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:42.230615  445201 cri.go:89] found id: ""
	I1026 09:18:42.230634  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:42.230696  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:42.237513  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:42.237587  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:42.273749  445201 cri.go:89] found id: ""
	I1026 09:18:42.273774  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.273783  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:42.273789  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:42.273853  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:42.317499  445201 cri.go:89] found id: ""
	I1026 09:18:42.317527  445201 logs.go:282] 0 containers: []
	W1026 09:18:42.317537  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:42.317550  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:42.317564  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:42.428824  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:42.428862  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:42.462647  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:42.462675  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:42.496373  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:42.496403  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:42.514247  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:42.514278  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:42.601636  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:42.601658  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:42.601675  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:42.652567  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:42.652602  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:42.728043  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:42.728151  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:42.835698  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:42.835733  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:42.890013  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:42.890087  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
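The interleaved 445201 lines above come from a concurrent test against another profile that is collecting diagnostics: for each component it runs crictl ps -a --quiet --name=<component>, then tails the last 400 log lines of every container ID found. The same two-step flow, sketched with os/exec (component name and tail count from the log; crictl on PATH is assumed):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: list container IDs for one component, as in `crictl ps -a --quiet --name=etcd`.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=etcd").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		// Step 2: tail the last 400 log lines of each container found.
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			log.Printf("crictl logs %s: %v", id, err)
		}
		fmt.Printf("==> %s <==\n%s", id, logs)
	}
}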
	I1026 09:18:44.444489  468013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.crt.65736a78 ...
	I1026 09:18:44.444504  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.crt.65736a78: {Name:mkd050c3886575f8a4020c3dc98dc74b788a4da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:44.444703  468013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.key.65736a78 ...
	I1026 09:18:44.444713  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.key.65736a78: {Name:mk1934165b037ba8a611528595ca486e5516c0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:44.444797  468013 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.crt.65736a78 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.crt
	I1026 09:18:44.444869  468013 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.key.65736a78 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.key
	I1026 09:18:44.444920  468013 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.key
	I1026 09:18:44.444931  468013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.crt with IP's: []
	I1026 09:18:45.459427  468013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.crt ...
	I1026 09:18:45.459446  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.crt: {Name:mk2de92027c4e770f8f5fa33e1d777164d0ef58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:45.459652  468013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.key ...
	I1026 09:18:45.459670  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.key: {Name:mkb42ee8984be0a9f1dc73a66a90ef09f26f286c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:18:45.459864  468013 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:18:45.459901  468013 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:18:45.459909  468013 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:18:45.459931  468013 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:18:45.459952  468013 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:18:45.459973  468013 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:18:45.460015  468013 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:18:45.460674  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:18:45.480564  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:18:45.503762  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:18:45.525073  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:18:45.544820  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 09:18:45.568373  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:18:45.586166  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:18:45.603991  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/cert-expiration-375355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:18:45.621890  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:18:45.643230  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:18:45.665060  468013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:18:45.684621  468013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:18:45.705379  468013 ssh_runner.go:195] Run: openssl version
	I1026 09:18:45.712195  468013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:18:45.726401  468013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:18:45.730426  468013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:18:45.730483  468013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:18:45.776007  468013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:18:45.787421  468013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:18:45.802384  468013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:18:45.807264  468013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:18:45.807327  468013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:18:45.853517  468013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:18:45.865220  468013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:18:45.875699  468013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:18:45.880273  468013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:18:45.880328  468013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:18:45.925858  468013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:18:45.936897  468013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:18:45.941024  468013 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:18:45.941066  468013 kubeadm.go:400] StartCluster: {Name:cert-expiration-375355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-375355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:18:45.941129  468013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:18:45.941193  468013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:18:45.978638  468013 cri.go:89] found id: ""
	I1026 09:18:45.978790  468013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:18:45.991016  468013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:18:46.000118  468013 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:18:46.000178  468013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:18:46.020402  468013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:18:46.020411  468013 kubeadm.go:157] found existing configuration files:
	
	I1026 09:18:46.020463  468013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:18:46.029489  468013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:18:46.029543  468013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:18:46.037219  468013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:18:46.046809  468013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:18:46.046886  468013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:18:46.061696  468013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:18:46.075506  468013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:18:46.075561  468013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:18:46.083443  468013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:18:46.093242  468013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:18:46.093305  468013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
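On this fresh node none of the /etc/kubernetes/*.conf files exist, so every grep for the control-plane endpoint exits 2 and the file is removed with rm -f regardless. The cleanup decision, roughly, in Go (paths and endpoint string from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		// Keep the file only if it exists and already points at the expected endpoint.
		if err == nil && strings.Contains(string(data), endpoint) {
			continue
		}
		os.Remove(conf) // ignore errors, like rm -f
		fmt.Println("removed (or absent):", conf)
	}
}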
	I1026 09:18:46.102610  468013 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:18:46.171064  468013 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:18:46.171121  468013 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:18:46.223667  468013 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:18:46.223734  468013 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:18:46.223770  468013 kubeadm.go:318] OS: Linux
	I1026 09:18:46.223816  468013 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:18:46.223865  468013 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:18:46.223914  468013 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:18:46.223963  468013 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:18:46.224012  468013 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:18:46.224070  468013 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:18:46.224116  468013 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:18:46.224174  468013 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:18:46.224221  468013 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:18:46.344492  468013 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:18:46.344598  468013 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:18:46.344691  468013 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:18:46.355349  468013 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:18:46.361462  468013 out.go:252]   - Generating certificates and keys ...
	I1026 09:18:46.361564  468013 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:18:46.361633  468013 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:18:46.823292  468013 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:18:47.017063  468013 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:18:47.306136  468013 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 09:18:47.991754  468013 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:18:48.666694  468013 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:18:48.667090  468013 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-375355 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 09:18:48.899270  468013 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 09:18:48.899610  468013 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-375355 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 09:18:45.632211  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:45.645366  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:45.645433  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:45.681113  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:45.681140  445201 cri.go:89] found id: ""
	I1026 09:18:45.681148  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:45.681201  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.686769  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:45.686843  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:45.717975  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:45.717999  445201 cri.go:89] found id: ""
	I1026 09:18:45.718008  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:45.718067  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.721640  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:45.721709  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:45.756101  445201 cri.go:89] found id: ""
	I1026 09:18:45.756130  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.756140  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:45.756147  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:45.756205  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:45.787128  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:45.787153  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:45.787159  445201 cri.go:89] found id: ""
	I1026 09:18:45.787166  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:45.787221  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.791689  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.795274  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:45.795360  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:45.829769  445201 cri.go:89] found id: ""
	I1026 09:18:45.829798  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.829806  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:45.829812  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:45.829873  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:45.862647  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:45.862673  445201 cri.go:89] found id: ""
	I1026 09:18:45.862682  445201 logs.go:282] 1 containers: [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:45.862763  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:45.867860  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:45.867935  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:45.903841  445201 cri.go:89] found id: ""
	I1026 09:18:45.903870  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.903879  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:45.903892  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:45.903952  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:45.931348  445201 cri.go:89] found id: ""
	I1026 09:18:45.931377  445201 logs.go:282] 0 containers: []
	W1026 09:18:45.931386  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:45.931401  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:45.931412  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:45.976805  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:45.976843  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:46.051944  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:46.051978  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:46.086154  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:46.086182  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:46.180461  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:46.180500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:46.242681  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:46.242729  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:46.335607  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:46.335630  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:46.335644  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:46.372229  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:46.372259  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:46.576414  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:46.576452  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:46.593353  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:46.593384  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:49.196673  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:49.208768  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:49.208834  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:49.237992  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:49.238012  445201 cri.go:89] found id: ""
	I1026 09:18:49.238026  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:49.238078  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.242232  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:49.242301  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:49.274115  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:49.274134  445201 cri.go:89] found id: ""
	I1026 09:18:49.274141  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:49.274196  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.278751  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:49.278824  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:49.316174  445201 cri.go:89] found id: ""
	I1026 09:18:49.316247  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.316268  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:49.316286  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:49.316376  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:49.364077  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:49.364097  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:49.364102  445201 cri.go:89] found id: ""
	I1026 09:18:49.364110  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:49.364167  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.368007  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.371900  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:49.371969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:49.480960  445201 cri.go:89] found id: ""
	I1026 09:18:49.480983  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.480991  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:49.480997  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:49.481051  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:49.573122  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:49.573141  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:49.573146  445201 cri.go:89] found id: ""
	I1026 09:18:49.573153  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:49.573207  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.582516  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:49.586286  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:49.586360  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:49.653326  445201 cri.go:89] found id: ""
	I1026 09:18:49.653348  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.653356  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:49.653362  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:49.653419  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:49.753083  445201 cri.go:89] found id: ""
	I1026 09:18:49.753104  445201 logs.go:282] 0 containers: []
	W1026 09:18:49.753112  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:49.753121  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:49.753133  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:49.868436  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:49.868509  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:50.107262  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:50.107333  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:50.107360  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:50.218345  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:50.218421  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:50.300338  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:18:50.300415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
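The "listing CRI containers" / "found id" cycles above are process 445201 polling each control-plane component by name through crictl. A minimal sketch of that enumeration pattern in Go, assuming crictl is installed and sudo is non-interactive (the helper name is illustrative, not minikube's actual cri.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<name>":
	// --quiet prints one container ID per line, or nothing when none match.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", name, err)
				continue
			}
			// An empty result reproduces the `No container was found matching "..."` warnings.
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}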
	I1026 09:18:50.248348  468013 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:18:51.215071  468013 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:18:51.480906  468013 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:18:51.481215  468013 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:18:52.535962  468013 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:18:52.911823  468013 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 09:18:53.772524  468013 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:18:53.930856  468013 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:18:54.375127  468013 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:18:54.376033  468013 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:18:54.378848  468013 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
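The interleaved 468013 lines are kubeadm init running its phases in order: certificate generation under [certs], kubeconfig files under [kubeconfig], then static Pod manifests for etcd and the control plane. The same units can be driven one at a time with kubeadm's "init phase" subcommands; the wrapper below is only an illustrative sketch, not how minikube invokes them:

	package main

	import (
		"log"
		"os/exec"
	)

	// runPhase shells out to one kubeadm init phase, the units that emit the
	// "[certs]", "[kubeconfig]", "[etcd]" and "[control-plane]" lines above.
	func runPhase(args ...string) {
		cmd := exec.Command("kubeadm", append([]string{"init", "phase"}, args...)...)
		out, err := cmd.CombinedOutput()
		log.Printf("kubeadm init phase %v:\n%s", args, out)
		if err != nil {
			log.Fatalf("phase failed: %v", err)
		}
	}

	func main() {
		runPhase("certs", "all")         // CA, apiserver, etcd and sa key material
		runPhase("kubeconfig", "all")    // admin.conf, kubelet.conf, scheduler.conf, ...
		runPhase("etcd", "local")        // static Pod manifest for local etcd
		runPhase("control-plane", "all") // apiserver/controller-manager/scheduler manifests
	}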
	I1026 09:18:50.451266  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:50.451352  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:50.702844  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:50.702924  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:50.761113  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:50.761141  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:50.992387  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:50.992470  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:51.164309  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:51.164354  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:51.244398  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:51.244428  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
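Each "Gathering logs for X ..." pair above maps one log source to one remote command: journalctl units for kubelet and CRI-O, filtered dmesg for the kernel, and "crictl logs --tail 400 <id>" for the container IDs found earlier. A compact sketch of that mapping (an assumed layout for illustration, not the actual logs.go table):

	package main

	import "fmt"

	// logSources pairs each source named in the log with the command it runs;
	// <id> stands for a container ID from the crictl enumeration step.
	var logSources = map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"container <id>":   `sudo /usr/local/bin/crictl logs --tail 400 <id>`,
	}

	func main() {
		for src, cmd := range logSources {
			fmt.Printf("Gathering logs for %s ...\n  %s\n", src, cmd)
		}
	}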
	I1026 09:18:53.918830  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:53.931665  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:53.931737  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:53.965110  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:53.965128  445201 cri.go:89] found id: ""
	I1026 09:18:53.965136  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:53.965189  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:53.969286  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:53.969364  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:53.998070  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:53.998148  445201 cri.go:89] found id: ""
	I1026 09:18:53.998172  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:53.998300  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.009478  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:54.009549  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:54.057681  445201 cri.go:89] found id: ""
	I1026 09:18:54.057704  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.057713  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:54.057719  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:54.057779  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:54.096667  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:54.096686  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:54.096690  445201 cri.go:89] found id: ""
	I1026 09:18:54.096697  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:54.096754  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.100714  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.106334  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:54.106412  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:54.142903  445201 cri.go:89] found id: ""
	I1026 09:18:54.142932  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.142942  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:54.142949  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:54.143009  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:54.178911  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:54.178941  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:54.178947  445201 cri.go:89] found id: ""
	I1026 09:18:54.178957  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:54.179025  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.183501  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:54.187589  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:54.187669  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:54.221295  445201 cri.go:89] found id: ""
	I1026 09:18:54.221331  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.221339  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:54.221347  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:54.221418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:54.257120  445201 cri.go:89] found id: ""
	I1026 09:18:54.257156  445201 logs.go:282] 0 containers: []
	W1026 09:18:54.257165  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:54.257175  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:54.257187  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:54.274116  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:54.274152  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:54.367913  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:54.367953  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:54.424863  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:54.424897  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:18:54.522786  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:54.522824  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:54.558996  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:54.559027  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:54.761276  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:54.761314  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:54.842974  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
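The recurring "failed describe nodes" warnings are expected while the apiserver is down: kubectl dials localhost:8443 inside the node, the connection is refused, and minikube records the failure and keeps polling. A sketch of classifying that outcome (paths copied from the log; the error handling is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		switch {
		case err == nil:
			fmt.Println(string(out))
		case strings.Contains(string(out), "connection refused"):
			// Matches the log above: nothing is listening on localhost:8443 yet.
			fmt.Println("apiserver unreachable; retry on the next poll cycle")
		default:
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
		}
	}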
	I1026 09:18:54.842998  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:54.843013  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:54.883224  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:54.883258  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:54.959058  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:54.959095  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:54.990575  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:18:54.990606  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:54.382418  468013 out.go:252]   - Booting up control plane ...
	I1026 09:18:54.382530  468013 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:18:54.382610  468013 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:18:54.383477  468013 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:18:54.412974  468013 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:18:54.413356  468013 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:18:54.421759  468013 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:18:54.422301  468013 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:18:54.422510  468013 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 09:18:54.593750  468013 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 09:18:54.593865  468013 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 09:18:55.596570  468013 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001926296s
	I1026 09:18:55.600215  468013 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 09:18:55.600307  468013 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 09:18:55.600573  468013 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 09:18:55.600658  468013 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
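kubeadm's control-plane-check polls three fixed endpoints, each with the 4m0s budget shown above: kube-apiserver's /livez on the advertise address, kube-controller-manager's /healthz on 127.0.0.1:10257, and kube-scheduler's /livez on 127.0.0.1:10259. A minimal polling sketch; TLS verification is skipped here only because the sketch carries no client certificates, whereas kubeadm verifies properly:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(name, url string, timeout time.Duration) {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("[control-plane-check] %s is healthy\n", name)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("[control-plane-check] %s did not become healthy in %v\n", name, timeout)
	}

	func main() {
		waitHealthy("kube-apiserver", "https://192.168.85.2:8443/livez", 4*time.Minute)
		waitHealthy("kube-controller-manager", "https://127.0.0.1:10257/healthz", 4*time.Minute)
		waitHealthy("kube-scheduler", "https://127.0.0.1:10259/livez", 4*time.Minute)
	}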
	I1026 09:18:57.523986  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:18:57.540088  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:18:57.540158  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:18:57.588980  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:57.589003  445201 cri.go:89] found id: ""
	I1026 09:18:57.589011  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:18:57.589068  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.592926  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:18:57.593004  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:18:57.636503  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:57.636526  445201 cri.go:89] found id: ""
	I1026 09:18:57.636535  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:18:57.636593  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.641341  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:18:57.641415  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:18:57.685033  445201 cri.go:89] found id: ""
	I1026 09:18:57.685059  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.685068  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:18:57.685075  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:18:57.685131  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:18:57.711978  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:57.712001  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:57.712007  445201 cri.go:89] found id: ""
	I1026 09:18:57.712014  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:18:57.712080  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.715873  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.719254  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:18:57.719321  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:18:57.753685  445201 cri.go:89] found id: ""
	I1026 09:18:57.753710  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.753718  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:18:57.753725  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:18:57.753778  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:18:57.784895  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:57.784916  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:57.784921  445201 cri.go:89] found id: ""
	I1026 09:18:57.784929  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:18:57.784983  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.788971  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:18:57.792746  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:18:57.792820  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:18:57.840896  445201 cri.go:89] found id: ""
	I1026 09:18:57.840923  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.840933  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:18:57.840939  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:18:57.841003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:18:57.880935  445201 cri.go:89] found id: ""
	I1026 09:18:57.880960  445201 logs.go:282] 0 containers: []
	W1026 09:18:57.880969  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:18:57.880978  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:18:57.880990  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:18:58.119061  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:18:58.119100  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:18:58.144002  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:18:58.144032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:18:58.272226  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:18:58.272262  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:18:58.338046  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:18:58.338081  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:18:58.468726  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:18:58.468824  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:18:58.537377  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:18:58.537418  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:18:58.581358  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:18:58.581384  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:18:58.626891  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:18:58.626920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:18:58.765686  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:18:58.765706  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:18:58.765720  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:18:58.802844  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:18:58.802873  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:00.515681  468013 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.914956193s
	I1026 09:19:00.802956  468013 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.202671584s
	I1026 09:19:02.601425  468013 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.0011542s
	I1026 09:19:02.629042  468013 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:19:02.658378  468013 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:19:02.674178  468013 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:19:02.674383  468013 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-375355 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:19:02.692001  468013 kubeadm.go:318] [bootstrap-token] Using token: e15dsg.w5bo9j6k93ekgd4v
	I1026 09:19:02.695048  468013 out.go:252]   - Configuring RBAC rules ...
	I1026 09:19:02.695182  468013 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:19:02.708631  468013 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:19:02.717425  468013 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:19:02.724210  468013 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:19:02.730449  468013 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:19:02.743932  468013 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:19:03.010582  468013 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:19:03.439797  468013 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:19:04.009544  468013 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:19:04.010616  468013 kubeadm.go:318] 
	I1026 09:19:04.010697  468013 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:19:04.010702  468013 kubeadm.go:318] 
	I1026 09:19:04.010828  468013 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:19:04.010832  468013 kubeadm.go:318] 
	I1026 09:19:04.010858  468013 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:19:04.010919  468013 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:19:04.010970  468013 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:19:04.010974  468013 kubeadm.go:318] 
	I1026 09:19:04.011029  468013 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:19:04.011032  468013 kubeadm.go:318] 
	I1026 09:19:04.011081  468013 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:19:04.011085  468013 kubeadm.go:318] 
	I1026 09:19:04.011140  468013 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:19:04.011218  468013 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:19:04.011288  468013 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:19:04.011294  468013 kubeadm.go:318] 
	I1026 09:19:04.011381  468013 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:19:04.011461  468013 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:19:04.011464  468013 kubeadm.go:318] 
	I1026 09:19:04.011551  468013 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e15dsg.w5bo9j6k93ekgd4v \
	I1026 09:19:04.011659  468013 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:19:04.011679  468013 kubeadm.go:318] 	--control-plane 
	I1026 09:19:04.011682  468013 kubeadm.go:318] 
	I1026 09:19:04.011769  468013 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:19:04.011773  468013 kubeadm.go:318] 
	I1026 09:19:04.011857  468013 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e15dsg.w5bo9j6k93ekgd4v \
	I1026 09:19:04.011963  468013 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:19:04.016229  468013 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:19:04.016473  468013 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:19:04.016589  468013 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
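The join commands above pin the cluster CA with --discovery-token-ca-cert-hash. By kubeadm convention that value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from /etc/kubernetes/pki/ca.crt (path assumed, the standard kubeadm location):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("ca.crt: no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the raw SubjectPublicKeyInfo, not the whole certificate.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}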
	I1026 09:19:04.016642  468013 cni.go:84] Creating CNI manager for ""
	I1026 09:19:04.016649  468013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:19:04.019993  468013 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:19:04.023024  468013 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:19:04.027387  468013 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:19:04.027398  468013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:19:04.042841  468013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
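With the docker driver and the crio runtime, minikube recommends kindnet and applies it in two steps: the manifest is copied into the node (the "scp memory --> /var/tmp/minikube/cni.yaml" line) and then fed to the in-node kubectl, as just shown. A reduced sketch, with the scp step replaced by a local file write and a placeholder manifest:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		manifest := []byte("# kindnet DaemonSet manifest would go here\n")
		// In the log this is an scp into the node; a local write stands in for it.
		if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
			log.Fatal(err)
		}
		out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		log.Printf("%s", out)
		if err != nil {
			log.Fatal(err)
		}
	}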
	I1026 09:19:04.324768  468013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:19:04.324920  468013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:19:04.325000  468013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-375355 minikube.k8s.io/updated_at=2025_10_26T09_19_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=cert-expiration-375355 minikube.k8s.io/primary=true
	I1026 09:19:04.335818  468013 ops.go:34] apiserver oom_adj: -16
	I1026 09:19:04.513525  468013 kubeadm.go:1113] duration metric: took 188.651608ms to wait for elevateKubeSystemPrivileges
	I1026 09:19:04.513542  468013 kubeadm.go:402] duration metric: took 18.572478201s to StartCluster
	I1026 09:19:04.513558  468013 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:19:04.513619  468013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:19:04.514631  468013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:19:04.514860  468013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:19:04.514866  468013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:19:04.515138  468013 config.go:182] Loaded profile config "cert-expiration-375355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:19:04.515174  468013 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:19:04.515232  468013 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-375355"
	I1026 09:19:04.515247  468013 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-375355"
	I1026 09:19:04.515273  468013 host.go:66] Checking if "cert-expiration-375355" exists ...
	I1026 09:19:04.515960  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Status}}
	I1026 09:19:04.516543  468013 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-375355"
	I1026 09:19:04.516560  468013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-375355"
	I1026 09:19:04.516833  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Status}}
	I1026 09:19:04.518039  468013 out.go:179] * Verifying Kubernetes components...
	I1026 09:19:04.521986  468013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:19:04.571400  468013 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-375355"
	I1026 09:19:04.571433  468013 host.go:66] Checking if "cert-expiration-375355" exists ...
	I1026 09:19:04.571888  468013 cli_runner.go:164] Run: docker container inspect cert-expiration-375355 --format={{.State.Status}}
	I1026 09:19:04.574331  468013 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
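Addon enablement is just more kubectl: given the toEnable map above, only storage-provisioner and default-storageclass are on, so their YAMLs are copied under /etc/kubernetes/addons/ and applied with the in-node kubectl. A sketch of the apply loop (file names taken from the log):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		addons := []string{
			"/etc/kubernetes/addons/storageclass.yaml",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
		}
		for _, f := range addons {
			// sudo accepts VAR=value settings before the command, as in the log.
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "-f", f)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("apply %s: %v\n%s", f, err, out)
			}
		}
		log.Println("Enabled addons: default-storageclass, storage-provisioner")
	}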
	I1026 09:19:01.423814  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:01.435792  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:01.435861  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:01.469223  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:01.469246  445201 cri.go:89] found id: ""
	I1026 09:19:01.469255  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:01.469314  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.473717  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:01.473784  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:01.507729  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:01.507750  445201 cri.go:89] found id: ""
	I1026 09:19:01.507759  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:01.507814  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.512111  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:01.512179  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:01.540080  445201 cri.go:89] found id: ""
	I1026 09:19:01.540105  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.540119  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:01.540126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:01.540182  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:01.568546  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:01.568565  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:01.568570  445201 cri.go:89] found id: ""
	I1026 09:19:01.568577  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:01.568628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.572626  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.576629  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:01.576695  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:01.605879  445201 cri.go:89] found id: ""
	I1026 09:19:01.605903  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.605911  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:01.605918  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:01.605973  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:01.670518  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:01.670540  445201 cri.go:89] found id: "211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:19:01.670546  445201 cri.go:89] found id: ""
	I1026 09:19:01.670565  445201 logs.go:282] 2 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad]
	I1026 09:19:01.670628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.683230  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:01.687508  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:01.687586  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:01.744856  445201 cri.go:89] found id: ""
	I1026 09:19:01.744881  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.744891  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:01.744903  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:01.744969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:01.809189  445201 cri.go:89] found id: ""
	I1026 09:19:01.809215  445201 logs.go:282] 0 containers: []
	W1026 09:19:01.809226  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:01.809235  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:01.809259  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:01.853086  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:01.853113  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:01.952100  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:01.952140  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:01.971357  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:01.971388  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:02.113349  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:02.113389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:02.189865  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:02.189898  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:02.222701  445201 logs.go:123] Gathering logs for kube-controller-manager [211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad] ...
	I1026 09:19:02.222762  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 211b28dcbd74e648bdc31fe2f4fa24a97b10ed5ba9860c41eb8fb41572319cad"
	I1026 09:19:02.251641  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:02.251668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:02.305611  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:02.305673  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:02.502516  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:02.502556  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:02.604316  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:02.604337  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:02.604349  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:05.157227  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:05.172427  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:05.172497  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:05.212673  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:05.212695  445201 cri.go:89] found id: ""
	I1026 09:19:05.212703  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:05.212762  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.218913  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:05.218988  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:05.266883  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:05.266951  445201 cri.go:89] found id: ""
	I1026 09:19:05.266976  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:05.267067  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.273271  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:05.273341  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:05.328492  445201 cri.go:89] found id: ""
	I1026 09:19:05.328515  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.328524  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:05.328530  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:05.328589  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:05.361661  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:05.361681  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:05.361686  445201 cri.go:89] found id: ""
	I1026 09:19:05.361693  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:05.361750  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.366096  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.370154  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:05.370277  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:04.577354  468013 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:19:04.577366  468013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:19:04.577439  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:19:04.608157  468013 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:19:04.608170  468013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:19:04.608233  468013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-375355
	I1026 09:19:04.640422  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:19:04.645939  468013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/cert-expiration-375355/id_rsa Username:docker}
	I1026 09:19:04.783298  468013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:19:04.857184  468013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:19:04.924748  468013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:19:04.947386  468013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:19:05.268351  468013 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:19:05.268411  468013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:05.268558  468013 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 09:19:05.728816  468013 api_server.go:72] duration metric: took 1.213923734s to wait for apiserver process to appear ...
	I1026 09:19:05.728829  468013 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:19:05.728848  468013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:19:05.731720  468013 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 09:19:05.734511  468013 addons.go:514] duration metric: took 1.219318582s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 09:19:05.743739  468013 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 09:19:05.745607  468013 api_server.go:141] control plane version: v1.34.1
	I1026 09:19:05.745622  468013 api_server.go:131] duration metric: took 16.787177ms to wait for apiserver health ...
	I1026 09:19:05.745630  468013 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:19:05.757725  468013 system_pods.go:59] 5 kube-system pods found
	I1026 09:19:05.757750  468013 system_pods.go:61] "etcd-cert-expiration-375355" [bf2c8dfb-e3f3-4d1c-972c-e92f14346fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:19:05.757756  468013 system_pods.go:61] "kube-apiserver-cert-expiration-375355" [f2646714-a981-4112-8a81-3a9cb8c50ed6] Running
	I1026 09:19:05.757764  468013 system_pods.go:61] "kube-controller-manager-cert-expiration-375355" [b7c0e01a-47db-4eaa-8049-4229915bfa47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:19:05.757770  468013 system_pods.go:61] "kube-scheduler-cert-expiration-375355" [772eb4a0-75e9-4d78-9253-42097b3615d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:19:05.757775  468013 system_pods.go:61] "storage-provisioner" [adc24942-6237-419c-8cc4-c2f613585404] Pending
	I1026 09:19:05.757780  468013 system_pods.go:74] duration metric: took 12.14625ms to wait for pod list to return data ...
	I1026 09:19:05.757790  468013 kubeadm.go:586] duration metric: took 1.242903636s to wait for: map[apiserver:true system_pods:true]
	I1026 09:19:05.757802  468013 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:19:05.766448  468013 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:19:05.766467  468013 node_conditions.go:123] node cpu capacity is 2
	I1026 09:19:05.766479  468013 node_conditions.go:105] duration metric: took 8.673149ms to run NodePressure ...
	I1026 09:19:05.766491  468013 start.go:241] waiting for startup goroutines ...
	I1026 09:19:05.777956  468013 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-375355" context rescaled to 1 replicas
	I1026 09:19:05.777977  468013 start.go:246] waiting for cluster config update ...
	I1026 09:19:05.777988  468013 start.go:255] writing updated cluster config ...
	I1026 09:19:05.778279  468013 ssh_runner.go:195] Run: rm -f paused
	I1026 09:19:05.855497  468013 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:19:05.860522  468013 out.go:179] * Done! kubectl is now configured to use "cert-expiration-375355" cluster and "default" namespace by default
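One step worth noting in the finished 468013 run is the CoreDNS rewrite at 09:19:04.783298: the coredns ConfigMap is piped through sed so that host.minikube.internal resolves to the host gateway (192.168.85.1) before the Corefile falls through to /etc/resolv.conf. A Go sketch that mimics the same insertion on a Corefile string (the sample Corefile is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts block before the "forward . /etc/resolv.conf"
	// directive, mirroring the sed pipeline in the log above.
	func injectHostRecord(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
	}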
	I1026 09:19:05.429405  445201 cri.go:89] found id: ""
	I1026 09:19:05.429428  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.429438  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:05.429444  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:05.429503  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:05.459155  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:05.459177  445201 cri.go:89] found id: ""
	I1026 09:19:05.459185  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:05.459251  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:05.463412  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:05.463490  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:05.497816  445201 cri.go:89] found id: ""
	I1026 09:19:05.497898  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.497922  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:05.497944  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:05.498044  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:05.547135  445201 cri.go:89] found id: ""
	I1026 09:19:05.547157  445201 logs.go:282] 0 containers: []
	W1026 09:19:05.547165  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:05.547180  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:05.547191  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:05.765529  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:05.765619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:05.897332  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:05.897415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:05.942114  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:05.942147  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:06.026733  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:06.026772  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:06.058485  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:06.058517  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:06.142317  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:06.142354  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:06.173836  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:06.173866  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:06.191159  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:06.191191  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:06.262250  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:06.262269  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:06.262283  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:08.793456  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:08.806010  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:08.806104  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:08.832478  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:08.832501  445201 cri.go:89] found id: ""
	I1026 09:19:08.832510  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:08.832592  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.836764  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:08.836886  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:08.864680  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:08.864703  445201 cri.go:89] found id: ""
	I1026 09:19:08.864713  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:08.864790  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.868822  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:08.868922  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:08.915861  445201 cri.go:89] found id: ""
	I1026 09:19:08.915885  445201 logs.go:282] 0 containers: []
	W1026 09:19:08.915894  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:08.915900  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:08.915956  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:08.958216  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:08.958236  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:08.958241  445201 cri.go:89] found id: ""
	I1026 09:19:08.958248  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:08.958304  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.964680  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:08.969953  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:08.970071  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:09.018018  445201 cri.go:89] found id: ""
	I1026 09:19:09.018107  445201 logs.go:282] 0 containers: []
	W1026 09:19:09.018130  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:09.018163  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:09.018284  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:09.059907  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:09.059987  445201 cri.go:89] found id: ""
	I1026 09:19:09.060010  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:09.060116  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:09.065304  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:09.065406  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:09.103164  445201 cri.go:89] found id: ""
	I1026 09:19:09.103189  445201 logs.go:282] 0 containers: []
	W1026 09:19:09.103198  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:09.103205  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:09.103317  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:09.138529  445201 cri.go:89] found id: ""
	I1026 09:19:09.138555  445201 logs.go:282] 0 containers: []
	W1026 09:19:09.138564  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:09.138579  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:09.138612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:09.228162  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:09.228184  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:09.228200  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:09.297498  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:09.297576  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:09.330960  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:09.331040  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:09.358609  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:09.358678  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:09.411374  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:09.411410  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:09.646481  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:09.646521  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:09.663058  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:09.663088  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:09.752277  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:09.752366  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:09.793332  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:09.793367  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
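	[editor's note] kubelet and CRI-O run as systemd units inside the node, so their logs are read from the journal ("journalctl -u <unit> -n 400") rather than through crictl, which only knows about containers. A self-contained sketch of that read follows, assuming journalctl is available and reusing the 400-line tail the collector uses above; it is an illustration, not minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs mirrors the `sudo journalctl -u <unit> -n 400` calls in the
	// log: it returns the last 400 journal lines for a systemd unit.
	func unitLogs(unit string) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
		return string(out), err
	}

	func main() {
		for _, u := range []string{"kubelet", "crio"} {
			logs, err := unitLogs(u)
			if err != nil {
				fmt.Println(u, "journal read failed:", err)
				continue
			}
			fmt.Printf("== %s: %d bytes of recent logs ==\n", u, len(logs))
		}
	}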
	I1026 09:19:12.383633  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:12.401752  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:12.401839  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:12.438665  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:12.438686  445201 cri.go:89] found id: ""
	I1026 09:19:12.438694  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:12.438783  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.443593  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:12.443675  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:12.472091  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:12.472111  445201 cri.go:89] found id: ""
	I1026 09:19:12.472120  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:12.472180  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.475808  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:12.475886  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:12.514267  445201 cri.go:89] found id: ""
	I1026 09:19:12.514294  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.514304  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:12.514309  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:12.514366  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:12.542080  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:12.542104  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:12.542110  445201 cri.go:89] found id: ""
	I1026 09:19:12.542117  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:12.542174  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.546039  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.549615  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:12.549723  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:12.581223  445201 cri.go:89] found id: ""
	I1026 09:19:12.581250  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.581260  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:12.581266  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:12.581356  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:12.607499  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:12.607534  445201 cri.go:89] found id: ""
	I1026 09:19:12.607543  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:12.607617  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:12.611325  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:12.611402  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:12.641240  445201 cri.go:89] found id: ""
	I1026 09:19:12.641265  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.641275  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:12.641281  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:12.641337  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:12.667996  445201 cri.go:89] found id: ""
	I1026 09:19:12.668022  445201 logs.go:282] 0 containers: []
	W1026 09:19:12.668031  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:12.668052  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:12.668067  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:12.684406  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:12.684484  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:12.724395  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:12.724427  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:12.789568  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:12.789604  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:12.875805  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:12.875841  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:12.951166  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:12.951186  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:12.951198  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:13.039548  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:13.039585  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:13.068138  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:13.068168  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:13.097855  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:13.097884  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:13.131626  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:13.131654  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
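	[editor's note] The timestamps show the same gathering pass repeating every three to four seconds: the collector polls pgrep for a live kube-apiserver process and re-dumps component logs on every miss. Below is a stripped-down sketch of such a poll loop; the interval and deadline are illustrative choices, not minikube's actual values.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// pgrep exits non-zero when no matching process exists, so a nil
		// error from Run() means the apiserver process was found.
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("apiserver not up yet; gathering logs and retrying...")
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}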
	I1026 09:19:15.821288  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:15.832161  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:15.832241  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:15.861216  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:15.861236  445201 cri.go:89] found id: ""
	I1026 09:19:15.861244  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:15.861297  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.865258  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:15.865335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:15.896097  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:15.896120  445201 cri.go:89] found id: ""
	I1026 09:19:15.896129  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:15.896210  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.899835  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:15.899910  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:15.926309  445201 cri.go:89] found id: ""
	I1026 09:19:15.926336  445201 logs.go:282] 0 containers: []
	W1026 09:19:15.926345  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:15.926351  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:15.926409  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:15.952777  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:15.952801  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:15.952806  445201 cri.go:89] found id: ""
	I1026 09:19:15.952812  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:15.952870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.956680  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:15.960135  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:15.960205  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:15.990023  445201 cri.go:89] found id: ""
	I1026 09:19:15.990049  445201 logs.go:282] 0 containers: []
	W1026 09:19:15.990058  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:15.990064  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:15.990128  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:16.021736  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:16.021812  445201 cri.go:89] found id: ""
	I1026 09:19:16.021845  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:16.021940  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:16.025743  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:16.025814  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:16.053000  445201 cri.go:89] found id: ""
	I1026 09:19:16.053027  445201 logs.go:282] 0 containers: []
	W1026 09:19:16.053037  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:16.053043  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:16.053104  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:16.080003  445201 cri.go:89] found id: ""
	I1026 09:19:16.080063  445201 logs.go:282] 0 containers: []
	W1026 09:19:16.080072  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:16.080087  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:16.080104  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:16.264436  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:16.264474  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:16.332840  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:16.332859  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:16.332876  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:16.415544  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:16.415580  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:16.451127  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:16.451154  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:16.467646  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:16.467674  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:16.558547  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:16.558590  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:16.603633  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:16.603718  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:16.695146  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:16.695186  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:16.724106  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:16.724133  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:19.254855  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:19.265829  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:19.265901  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:19.295749  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:19.295772  445201 cri.go:89] found id: ""
	I1026 09:19:19.295780  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:19.295834  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.299585  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:19.299655  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:19.325212  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:19.325233  445201 cri.go:89] found id: ""
	I1026 09:19:19.325242  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:19.325298  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.328922  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:19.328992  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:19.356309  445201 cri.go:89] found id: ""
	I1026 09:19:19.356334  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.356342  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:19.356352  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:19.356411  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:19.384013  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:19.384044  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:19.384049  445201 cri.go:89] found id: ""
	I1026 09:19:19.384056  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:19.384115  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.392486  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.396329  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:19.396402  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:19.424188  445201 cri.go:89] found id: ""
	I1026 09:19:19.424218  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.424227  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:19.424233  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:19.424313  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:19.452797  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:19.452836  445201 cri.go:89] found id: ""
	I1026 09:19:19.452845  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:19.452959  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:19.456593  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:19.456688  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:19.481805  445201 cri.go:89] found id: ""
	I1026 09:19:19.481832  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.481841  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:19.481848  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:19.481908  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:19.522358  445201 cri.go:89] found id: ""
	I1026 09:19:19.522383  445201 logs.go:282] 0 containers: []
	W1026 09:19:19.522391  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:19.522406  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:19.522421  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:19.616560  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:19.616601  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:19.651182  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:19.651248  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:19.679075  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:19.679105  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:19.711689  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:19.711717  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:19.902761  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:19.902799  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:19.920018  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:19.920109  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:19.991407  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:19.991449  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:20.025552  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:20.025583  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:20.111969  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:20.112009  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:20.183281  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
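	[editor's note] Every "describe nodes" attempt in these cycles fails with kubectl's "connection refused" message for localhost:8443, which means nothing is listening on the apiserver port at all (consistent with the kube-apiserver container existing but the server not serving). A plain TCP dial reproduces the same check without kubectl; this is a hedged sketch where the port comes from the log and the timeout is arbitrary.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused dial here corresponds exactly to kubectl's
		// "connection to the server localhost:8443 was refused" error.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}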
	I1026 09:19:22.683932  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:22.694857  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:22.694924  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:22.723534  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:22.723570  445201 cri.go:89] found id: ""
	I1026 09:19:22.723579  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:22.723652  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.727309  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:22.727383  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:22.755329  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:22.755351  445201 cri.go:89] found id: ""
	I1026 09:19:22.755359  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:22.755418  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.759291  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:22.759367  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:22.786028  445201 cri.go:89] found id: ""
	I1026 09:19:22.786054  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.786062  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:22.786068  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:22.786127  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:22.814234  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:22.814255  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:22.814260  445201 cri.go:89] found id: ""
	I1026 09:19:22.814267  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:22.814329  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.818126  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.821666  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:22.821738  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:22.849444  445201 cri.go:89] found id: ""
	I1026 09:19:22.849467  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.849475  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:22.849481  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:22.849541  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:22.881184  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:22.881204  445201 cri.go:89] found id: ""
	I1026 09:19:22.881212  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:22.881269  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:22.885355  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:22.885474  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:22.915425  445201 cri.go:89] found id: ""
	I1026 09:19:22.915449  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.915459  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:22.915465  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:22.915521  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:22.945808  445201 cri.go:89] found id: ""
	I1026 09:19:22.945835  445201 logs.go:282] 0 containers: []
	W1026 09:19:22.945845  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:22.945861  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:22.945872  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:22.981028  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:22.981063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:23.062992  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:23.063030  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:23.095047  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:23.095074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:23.176099  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:23.176135  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:23.207189  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:23.207220  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:23.224316  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:23.224349  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:23.254199  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:23.254229  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:23.445110  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:23.445146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:23.521902  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:23.521925  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:23.521938  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
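	[editor's note] The "container status" step in these cycles uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl by its resolved path, retry the bare name if it is not on PATH, and fall back to docker if crictl fails entirely. The same logic translated to Go is shown below as a sketch, not minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the shell fallback chain from the log.
	func containerStatus() ([]byte, error) {
		path, err := exec.LookPath("crictl")
		if err != nil {
			path = "crictl" // mirror `|| echo crictl`: try the bare name anyway
		}
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
		// crictl missing or failed: fall back to docker, as in `|| sudo docker ps -a`.
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("no container runtime responded:", err)
			return
		}
		fmt.Print(string(out))
	}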
	I1026 09:19:26.118456  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:26.129332  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:26.129409  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:26.156225  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:26.156252  445201 cri.go:89] found id: ""
	I1026 09:19:26.156261  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:26.156318  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.160359  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:26.160433  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:26.187545  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:26.187623  445201 cri.go:89] found id: ""
	I1026 09:19:26.187646  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:26.187734  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.191452  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:26.191535  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:26.220053  445201 cri.go:89] found id: ""
	I1026 09:19:26.220083  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.220092  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:26.220098  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:26.220157  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:26.247035  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:26.247061  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:26.247066  445201 cri.go:89] found id: ""
	I1026 09:19:26.247074  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:26.247128  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.251068  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.254617  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:26.254791  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:26.280418  445201 cri.go:89] found id: ""
	I1026 09:19:26.280444  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.280453  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:26.280460  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:26.280540  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:26.307133  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:26.307155  445201 cri.go:89] found id: ""
	I1026 09:19:26.307164  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:26.307218  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:26.310730  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:26.310805  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:26.338848  445201 cri.go:89] found id: ""
	I1026 09:19:26.338933  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.338965  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:26.338990  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:26.339074  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:26.368727  445201 cri.go:89] found id: ""
	I1026 09:19:26.368802  445201 logs.go:282] 0 containers: []
	W1026 09:19:26.368817  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:26.368835  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:26.368846  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:26.557443  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:26.557484  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:26.626887  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:26.626911  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:26.626938  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:26.715465  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:26.715506  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:26.749738  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:26.749773  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:26.817294  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:26.817328  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:26.845295  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:26.845320  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:26.927592  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:26.927623  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:26.958755  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:26.958784  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:26.975590  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:26.975619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:29.505512  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:29.517231  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:29.517298  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:29.549241  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:29.549265  445201 cri.go:89] found id: ""
	I1026 09:19:29.549284  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:29.549352  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.553401  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:29.553473  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:29.585756  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:29.585779  445201 cri.go:89] found id: ""
	I1026 09:19:29.585787  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:29.585855  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.589981  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:29.590059  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:29.620889  445201 cri.go:89] found id: ""
	I1026 09:19:29.620976  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.621000  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:29.621030  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:29.621108  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:29.656684  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:29.656707  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:29.656712  445201 cri.go:89] found id: ""
	I1026 09:19:29.656720  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:29.656775  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.660970  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.664791  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:29.664911  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:29.692679  445201 cri.go:89] found id: ""
	I1026 09:19:29.692706  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.692715  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:29.692722  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:29.692780  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:29.723291  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:29.723314  445201 cri.go:89] found id: ""
	I1026 09:19:29.723323  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:29.723384  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:29.727266  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:29.727382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:29.764602  445201 cri.go:89] found id: ""
	I1026 09:19:29.764625  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.764634  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:29.764641  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:29.764698  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:29.791509  445201 cri.go:89] found id: ""
	I1026 09:19:29.791541  445201 logs.go:282] 0 containers: []
	W1026 09:19:29.791551  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:29.791570  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:29.791581  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:29.986761  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:29.986809  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:30.021126  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:30.021163  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:30.115835  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:30.115876  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:30.206256  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:30.206351  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:30.276098  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:30.276165  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:30.276185  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:30.378640  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:30.378680  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:30.418069  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:30.418104  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:30.453863  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:30.453897  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:30.485272  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:30.485304  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:33.030943  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:33.043496  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:33.043569  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:33.072212  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:33.072236  445201 cri.go:89] found id: ""
	I1026 09:19:33.072244  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:33.072323  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.076809  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:33.076918  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:33.104618  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:33.104685  445201 cri.go:89] found id: ""
	I1026 09:19:33.104707  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:33.104785  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.108546  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:33.108616  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:33.135652  445201 cri.go:89] found id: ""
	I1026 09:19:33.135691  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.135700  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:33.135707  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:33.135774  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:33.168721  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:33.168744  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:33.168750  445201 cri.go:89] found id: ""
	I1026 09:19:33.168757  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:33.168812  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.172699  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.176639  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:33.176717  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:33.207086  445201 cri.go:89] found id: ""
	I1026 09:19:33.207111  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.207120  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:33.207126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:33.207186  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:33.234147  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:33.234171  445201 cri.go:89] found id: ""
	I1026 09:19:33.234182  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:33.234237  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:33.238290  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:33.238365  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:33.265342  445201 cri.go:89] found id: ""
	I1026 09:19:33.265379  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.265388  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:33.265394  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:33.265496  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:33.291779  445201 cri.go:89] found id: ""
	I1026 09:19:33.291806  445201 logs.go:282] 0 containers: []
	W1026 09:19:33.291814  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:33.291829  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:33.291842  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:33.326396  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:33.326429  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:33.398166  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:33.398201  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:33.436776  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:33.436804  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:33.455157  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:33.455187  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:33.550806  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:33.550829  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:33.550843  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:33.658497  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:33.658534  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:33.690020  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:33.690046  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:33.719896  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:33.719927  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:33.806073  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:33.806113  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
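	
	The cycle above repeats roughly every three seconds while minikube waits for a healthy kube-apiserver: it polls for the process with pgrep, enumerates each control-plane component through crictl, then gathers the kubelet, dmesg, describe-nodes, CRI-O, and per-container logs. The same sweep can be reproduced by hand; the sketch below is assembled only from commands that appear verbatim in this log, assumes SSH access to the minikube node, and uses CONTAINER_ID as a placeholder for an ID printed by crictl.
	
	    # Manual version of the diagnostic sweep logged above.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'            # is an apiserver process running?
	    sudo crictl ps -a --quiet --name=kube-apiserver         # list apiserver containers in all states
	    sudo /usr/local/bin/crictl logs --tail 400 CONTAINER_ID # tail one container's logs
	    sudo journalctl -u kubelet -n 400                       # kubelet unit logs
	    sudo journalctl -u crio -n 400                          # CRI-O unit logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	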
	I1026 09:19:36.509281  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:36.520622  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:36.520698  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:36.546223  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:36.546246  445201 cri.go:89] found id: ""
	I1026 09:19:36.546254  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:36.546310  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.549997  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:36.550075  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:36.583408  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:36.583435  445201 cri.go:89] found id: ""
	I1026 09:19:36.583444  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:36.583508  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.587184  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:36.587252  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:36.614029  445201 cri.go:89] found id: ""
	I1026 09:19:36.614052  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.614060  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:36.614067  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:36.614124  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:36.646229  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:36.646249  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:36.646254  445201 cri.go:89] found id: ""
	I1026 09:19:36.646262  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:36.646314  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.650036  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.653367  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:36.653435  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:36.678532  445201 cri.go:89] found id: ""
	I1026 09:19:36.678555  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.678565  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:36.678571  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:36.678627  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:36.706608  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:36.706632  445201 cri.go:89] found id: ""
	I1026 09:19:36.706650  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:36.706733  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:36.710435  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:36.710503  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:36.741337  445201 cri.go:89] found id: ""
	I1026 09:19:36.741363  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.741372  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:36.741378  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:36.741440  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:36.770349  445201 cri.go:89] found id: ""
	I1026 09:19:36.770376  445201 logs.go:282] 0 containers: []
	W1026 09:19:36.770385  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:36.770404  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:36.770415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:36.956331  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:36.956374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:37.033188  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:37.033220  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:37.033232  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:37.069780  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:37.069816  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:37.097772  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:37.097802  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:37.125526  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:37.125553  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:37.209767  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:37.209804  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:37.227966  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:37.227996  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:37.318972  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:37.319009  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:37.389310  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:37.389347  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:39.920329  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:39.934448  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:39.934519  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:39.961696  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:39.961730  445201 cri.go:89] found id: ""
	I1026 09:19:39.961739  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:39.961797  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:39.966091  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:39.966171  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:39.994339  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:39.994363  445201 cri.go:89] found id: ""
	I1026 09:19:39.994371  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:39.994429  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:39.998234  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:39.998326  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:40.038519  445201 cri.go:89] found id: ""
	I1026 09:19:40.038548  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.038557  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:40.038563  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:40.038645  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:40.073675  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:40.073700  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:40.073706  445201 cri.go:89] found id: ""
	I1026 09:19:40.073714  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:40.073772  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:40.077812  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:40.081791  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:40.081865  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:40.112934  445201 cri.go:89] found id: ""
	I1026 09:19:40.112961  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.112976  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:40.112983  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:40.113048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:40.141045  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:40.141066  445201 cri.go:89] found id: ""
	I1026 09:19:40.141074  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:40.141157  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:40.145319  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:40.145398  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:40.175080  445201 cri.go:89] found id: ""
	I1026 09:19:40.175106  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.175114  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:40.175120  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:40.175222  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:40.222953  445201 cri.go:89] found id: ""
	I1026 09:19:40.222977  445201 logs.go:282] 0 containers: []
	W1026 09:19:40.222986  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:40.223000  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:40.223010  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:40.240693  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:40.240723  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:40.311992  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:40.312012  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:40.312081  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:40.343513  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:40.343544  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:40.375636  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:40.375706  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:40.573545  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:40.573587  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:40.665188  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:40.665275  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:40.706956  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:40.706990  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:40.778824  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:40.778859  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:40.871288  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:40.871333  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:43.403814  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:43.417134  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:43.417234  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:43.451712  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:43.451735  445201 cri.go:89] found id: ""
	I1026 09:19:43.451744  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:43.451805  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.456597  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:43.456744  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:43.487598  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:43.487621  445201 cri.go:89] found id: ""
	I1026 09:19:43.487630  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:43.487687  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.491554  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:43.491635  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:43.524925  445201 cri.go:89] found id: ""
	I1026 09:19:43.524950  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.524959  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:43.524966  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:43.525025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:43.559384  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:43.559409  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:43.559418  445201 cri.go:89] found id: ""
	I1026 09:19:43.559426  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:43.559505  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.563510  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.567393  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:43.567490  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:43.595768  445201 cri.go:89] found id: ""
	I1026 09:19:43.595796  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.595805  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:43.595811  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:43.595869  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:43.629403  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:43.629425  445201 cri.go:89] found id: ""
	I1026 09:19:43.629433  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:43.629511  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:43.633896  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:43.634000  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:43.662513  445201 cri.go:89] found id: ""
	I1026 09:19:43.662550  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.662560  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:43.662566  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:43.662632  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:43.691667  445201 cri.go:89] found id: ""
	I1026 09:19:43.691694  445201 logs.go:282] 0 containers: []
	W1026 09:19:43.691704  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:43.691720  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:43.691731  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:43.889884  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:43.889922  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:43.907194  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:43.907226  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:43.996451  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:43.996485  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:44.034823  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:44.034858  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:44.124289  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:44.124329  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:44.155342  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:44.155374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:44.228174  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:44.228197  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:44.228210  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:44.312702  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:44.312735  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:44.345172  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:44.345199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:46.872711  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:46.883842  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:46.883911  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:46.914222  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:46.914244  445201 cri.go:89] found id: ""
	I1026 09:19:46.914252  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:46.914312  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:46.918073  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:46.918149  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:46.944474  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:46.944499  445201 cri.go:89] found id: ""
	I1026 09:19:46.944507  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:46.944563  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:46.948640  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:46.948760  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:46.975889  445201 cri.go:89] found id: ""
	I1026 09:19:46.975915  445201 logs.go:282] 0 containers: []
	W1026 09:19:46.975924  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:46.975930  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:46.975986  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:47.003949  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:47.003974  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:47.003979  445201 cri.go:89] found id: ""
	I1026 09:19:47.003987  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:47.004060  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:47.008583  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:47.012475  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:47.012598  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:47.038419  445201 cri.go:89] found id: ""
	I1026 09:19:47.038502  445201 logs.go:282] 0 containers: []
	W1026 09:19:47.038535  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:47.038564  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:47.038654  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:47.067475  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:47.067540  445201 cri.go:89] found id: ""
	I1026 09:19:47.067563  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:47.067656  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:47.072094  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:47.072190  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:47.099129  445201 cri.go:89] found id: ""
	I1026 09:19:47.099155  445201 logs.go:282] 0 containers: []
	W1026 09:19:47.099163  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:47.099169  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:47.099267  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:47.130045  445201 cri.go:89] found id: ""
	I1026 09:19:47.130112  445201 logs.go:282] 0 containers: []
	W1026 09:19:47.130136  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:47.130169  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:47.130199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:47.157890  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:47.157921  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:47.242542  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:47.242581  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:47.447551  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:47.447632  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:47.465004  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:47.465079  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:47.548858  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:47.548923  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:47.548950  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:47.646494  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:47.646529  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:47.686085  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:47.686117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:47.713578  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:47.713608  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:47.742696  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:47.742763  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
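	
	Every "describe nodes" attempt in these cycles fails the same way: crictl reports a kube-apiserver container ID, yet kubectl's connection to localhost:8443 is refused, meaning the container exists but nothing is serving on the secure port. Assuming shell access to the node, one hypothetical triage step (not part of the test itself) is to check whether anything is listening there:
	
	    # Refused connections on 8443 usually mean no listener, not a firewall.
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	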
	I1026 09:19:50.319936  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:50.331143  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:50.331216  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:50.357915  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:50.357939  445201 cri.go:89] found id: ""
	I1026 09:19:50.357947  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:50.358003  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.361808  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:50.361884  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:50.393021  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:50.393042  445201 cri.go:89] found id: ""
	I1026 09:19:50.393051  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:50.393104  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.396989  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:50.397060  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:50.424294  445201 cri.go:89] found id: ""
	I1026 09:19:50.424319  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.424328  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:50.424335  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:50.424395  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:50.450782  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:50.450802  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:50.450807  445201 cri.go:89] found id: ""
	I1026 09:19:50.450814  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:50.450870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.454550  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.458056  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:50.458158  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:50.485349  445201 cri.go:89] found id: ""
	I1026 09:19:50.485424  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.485447  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:50.485469  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:50.485558  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:50.521298  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:50.521363  445201 cri.go:89] found id: ""
	I1026 09:19:50.521385  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:50.521471  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:50.525189  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:50.525285  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:50.553967  445201 cri.go:89] found id: ""
	I1026 09:19:50.553992  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.554001  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:50.554008  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:50.554096  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:50.582250  445201 cri.go:89] found id: ""
	I1026 09:19:50.582274  445201 logs.go:282] 0 containers: []
	W1026 09:19:50.582283  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:50.582314  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:50.582336  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:50.599045  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:50.599077  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:50.691417  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:50.691478  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:50.691508  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:50.763525  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:50.763920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:50.795943  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:50.795968  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:50.878180  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:50.878219  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:50.921117  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:50.921146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:51.123124  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:51.123165  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:51.231502  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:51.231540  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:51.305423  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:51.305457  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:53.834574  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:53.848958  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:53.849029  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:53.876821  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:53.876841  445201 cri.go:89] found id: ""
	I1026 09:19:53.876849  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:53.876917  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.880939  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:53.881059  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:53.910223  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:53.910246  445201 cri.go:89] found id: ""
	I1026 09:19:53.910255  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:53.910316  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.914035  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:53.914114  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:53.940967  445201 cri.go:89] found id: ""
	I1026 09:19:53.940992  445201 logs.go:282] 0 containers: []
	W1026 09:19:53.941000  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:53.941006  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:53.941063  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:53.970066  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:53.970087  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:53.970103  445201 cri.go:89] found id: ""
	I1026 09:19:53.970112  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:53.970168  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.974073  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:53.977507  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:53.977582  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:54.006522  445201 cri.go:89] found id: ""
	I1026 09:19:54.006548  445201 logs.go:282] 0 containers: []
	W1026 09:19:54.006557  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:54.006563  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:54.006628  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:54.036193  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:54.036214  445201 cri.go:89] found id: ""
	I1026 09:19:54.036222  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:54.036282  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:54.040972  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:54.041076  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:54.072998  445201 cri.go:89] found id: ""
	I1026 09:19:54.073024  445201 logs.go:282] 0 containers: []
	W1026 09:19:54.073033  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:54.073039  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:54.073099  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:54.104988  445201 cri.go:89] found id: ""
	I1026 09:19:54.105024  445201 logs.go:282] 0 containers: []
	W1026 09:19:54.105033  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:54.105052  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:54.105063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:54.307387  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:54.307426  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:54.382000  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:54.382026  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:54.382041  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:54.476834  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:54.476903  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:54.533817  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:54.533854  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:54.562847  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:54.562932  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:54.594186  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:54.594216  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:54.611803  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:54.611835  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:54.696633  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:54.696668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:54.724258  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:54.724285  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:57.314536  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:19:57.325708  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:19:57.325832  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:19:57.352680  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:57.352702  445201 cri.go:89] found id: ""
	I1026 09:19:57.352710  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:19:57.352768  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.356627  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:19:57.356710  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:19:57.396056  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:57.396078  445201 cri.go:89] found id: ""
	I1026 09:19:57.396086  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:19:57.396176  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.400072  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:19:57.400145  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:19:57.427812  445201 cri.go:89] found id: ""
	I1026 09:19:57.427841  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.427850  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:19:57.427857  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:19:57.427917  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:19:57.456429  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:57.456452  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:57.456458  445201 cri.go:89] found id: ""
	I1026 09:19:57.456467  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:19:57.456525  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.460645  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.464436  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:19:57.464508  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:19:57.491704  445201 cri.go:89] found id: ""
	I1026 09:19:57.491741  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.491750  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:19:57.491756  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:19:57.491827  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:19:57.527829  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:19:57.527853  445201 cri.go:89] found id: ""
	I1026 09:19:57.527861  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:19:57.527940  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:19:57.531724  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:19:57.531837  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:19:57.560862  445201 cri.go:89] found id: ""
	I1026 09:19:57.560889  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.560897  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:19:57.560903  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:19:57.560963  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:19:57.587217  445201 cri.go:89] found id: ""
	I1026 09:19:57.587293  445201 logs.go:282] 0 containers: []
	W1026 09:19:57.587307  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:19:57.587323  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:19:57.587334  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:19:57.672410  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:19:57.672448  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:19:57.865888  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:19:57.865929  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:19:57.897311  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:19:57.897340  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:19:57.914172  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:19:57.914202  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:19:57.988120  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:19:57.988141  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:19:57.988154  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:19:58.089631  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:19:58.089672  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:19:58.126451  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:19:58.126482  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:19:58.201184  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:19:58.201222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:19:58.231003  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:19:58.231030  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
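
Each polling cycle below repeats the same shape: look up containers per component with crictl ps -a --quiet --name=<component>, then tail the last 400 lines of every match. A minimal bash re-creation of one pass (the loop and the hard-coded component list are an assumption for illustration, not minikube's actual code):

    # Components queried in this log; coredns, kube-proxy, kindnet and
    # storage-provisioner come back empty on every pass.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${name}\""
        continue
      fi
      for id in ${ids}; do
        sudo crictl logs --tail 400 "${id}"   # one dump per container ID
      done
    done
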
	I1026 09:20:00.759063  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:00.772489  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:00.772571  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:00.802524  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:00.802602  445201 cri.go:89] found id: ""
	I1026 09:20:00.802623  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:00.802738  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.807352  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:00.807463  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:00.837109  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:00.837132  445201 cri.go:89] found id: ""
	I1026 09:20:00.837140  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:00.837201  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.841411  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:00.841639  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:00.871396  445201 cri.go:89] found id: ""
	I1026 09:20:00.871423  445201 logs.go:282] 0 containers: []
	W1026 09:20:00.871431  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:00.871438  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:00.871543  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:00.899767  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:00.899793  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:00.899798  445201 cri.go:89] found id: ""
	I1026 09:20:00.899806  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:00.899865  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.904037  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.908205  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:00.908287  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:00.943720  445201 cri.go:89] found id: ""
	I1026 09:20:00.943800  445201 logs.go:282] 0 containers: []
	W1026 09:20:00.943823  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:00.943845  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:00.943938  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:00.972167  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:00.972189  445201 cri.go:89] found id: ""
	I1026 09:20:00.972197  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:00.972304  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:00.976224  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:00.976301  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:01.008110  445201 cri.go:89] found id: ""
	I1026 09:20:01.008144  445201 logs.go:282] 0 containers: []
	W1026 09:20:01.008153  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:01.008159  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:01.008235  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:01.035472  445201 cri.go:89] found id: ""
	I1026 09:20:01.035514  445201 logs.go:282] 0 containers: []
	W1026 09:20:01.035522  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:01.035553  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:01.035574  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:01.053806  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:01.053888  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:01.143967  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:01.144012  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:01.181616  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:01.181654  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:01.257186  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:01.257222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:01.289204  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:01.289233  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:01.317688  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:01.317720  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:01.399424  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:01.399466  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:01.473543  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:01.473564  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:01.473577  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:01.516832  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:01.516871  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:04.205978  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:04.217084  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:04.217192  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:04.246438  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:04.246514  445201 cri.go:89] found id: ""
	I1026 09:20:04.246535  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:04.246609  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.253527  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:04.253653  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:04.281030  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:04.281108  445201 cri.go:89] found id: ""
	I1026 09:20:04.281123  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:04.281191  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.284980  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:04.285070  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:04.315436  445201 cri.go:89] found id: ""
	I1026 09:20:04.315464  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.315474  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:04.315480  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:04.315546  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:04.349347  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:04.349423  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:04.349443  445201 cri.go:89] found id: ""
	I1026 09:20:04.349467  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:04.349562  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.354196  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.358052  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:04.358166  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:04.389218  445201 cri.go:89] found id: ""
	I1026 09:20:04.389299  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.389321  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:04.389343  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:04.389437  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:04.416455  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:04.416492  445201 cri.go:89] found id: ""
	I1026 09:20:04.416501  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:04.416559  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:04.420377  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:04.420476  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:04.447634  445201 cri.go:89] found id: ""
	I1026 09:20:04.447657  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.447666  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:04.447673  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:04.447730  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:04.474156  445201 cri.go:89] found id: ""
	I1026 09:20:04.474181  445201 logs.go:282] 0 containers: []
	W1026 09:20:04.474190  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:04.474202  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:04.474214  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:04.524678  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:04.524712  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:04.556118  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:04.556147  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:04.582653  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:04.582683  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:04.650076  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:04.650150  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:04.650176  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:04.740766  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:04.740805  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:04.813196  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:04.813231  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:04.901634  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:04.901670  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:04.932029  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:04.932061  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:05.133407  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:05.133449  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:07.655144  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:07.666212  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:07.666284  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:07.692697  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:07.692721  445201 cri.go:89] found id: ""
	I1026 09:20:07.692728  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:07.692783  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.696542  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:07.696615  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:07.724049  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:07.724073  445201 cri.go:89] found id: ""
	I1026 09:20:07.724082  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:07.724141  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.727799  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:07.727875  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:07.753315  445201 cri.go:89] found id: ""
	I1026 09:20:07.753387  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.753410  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:07.753432  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:07.753546  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:07.784900  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:07.784930  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:07.784936  445201 cri.go:89] found id: ""
	I1026 09:20:07.784944  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:07.785019  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.789772  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.793763  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:07.793838  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:07.824377  445201 cri.go:89] found id: ""
	I1026 09:20:07.824403  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.824412  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:07.824418  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:07.824477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:07.852049  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:07.852071  445201 cri.go:89] found id: ""
	I1026 09:20:07.852079  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:07.852135  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:07.855969  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:07.856048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:07.883689  445201 cri.go:89] found id: ""
	I1026 09:20:07.883764  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.883787  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:07.883810  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:07.883898  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:07.911108  445201 cri.go:89] found id: ""
	I1026 09:20:07.911142  445201 logs.go:282] 0 containers: []
	W1026 09:20:07.911151  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:07.911182  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:07.911203  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:07.927536  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:07.927563  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:07.955598  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:07.955626  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:08.043127  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:08.043175  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:08.239554  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:08.239595  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:08.313871  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:08.313891  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:08.313906  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:08.409288  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:08.409330  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:08.445453  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:08.445490  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:08.542001  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:08.542041  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:08.575896  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:08.575925  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
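
The "container status" step is runtime-agnostic: it prefers whatever crictl resolves on PATH and falls back to docker ps -a if that fails, which is why the same Run: line works on both CRI-O and Docker nodes. Verbatim from the log, runnable on its own:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
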
	I1026 09:20:11.110016  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:11.123039  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:11.123119  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:11.156442  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:11.156519  445201 cri.go:89] found id: ""
	I1026 09:20:11.156536  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:11.156610  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.161156  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:11.161238  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:11.190856  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:11.190877  445201 cri.go:89] found id: ""
	I1026 09:20:11.190889  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:11.190947  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.194890  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:11.194965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:11.222317  445201 cri.go:89] found id: ""
	I1026 09:20:11.222401  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.222425  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:11.222448  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:11.222561  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:11.249996  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:11.250018  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:11.250023  445201 cri.go:89] found id: ""
	I1026 09:20:11.250030  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:11.250085  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.254563  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.258149  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:11.258249  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:11.285573  445201 cri.go:89] found id: ""
	I1026 09:20:11.285600  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.285609  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:11.285615  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:11.285704  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:11.314585  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:11.314607  445201 cri.go:89] found id: ""
	I1026 09:20:11.314616  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:11.314692  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:11.318565  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:11.318667  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:11.344845  445201 cri.go:89] found id: ""
	I1026 09:20:11.344875  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.344886  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:11.344892  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:11.344951  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:11.370675  445201 cri.go:89] found id: ""
	I1026 09:20:11.370699  445201 logs.go:282] 0 containers: []
	W1026 09:20:11.370707  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:11.370751  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:11.370766  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:11.463898  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:11.463935  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:11.548152  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:11.548191  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:11.591820  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:11.591850  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:11.783333  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:11.783374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:11.856907  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:11.856973  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:11.856994  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:11.903575  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:11.903607  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:11.981815  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:11.981852  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:12.011424  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:12.011457  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:12.041713  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:12.041742  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
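
The host-level sources are collected identically on every cycle: the last 400 journal lines for the crio and kubelet units, plus warning-or-worse kernel messages. The standalone equivalents (commands copied from the Run: lines above; --no-pager is an addition so journalctl behaves in a non-interactive shell):

    sudo journalctl -u crio -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
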
	I1026 09:20:14.559373  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:14.570544  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:14.570614  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:14.598242  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:14.598265  445201 cri.go:89] found id: ""
	I1026 09:20:14.598273  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:14.598328  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.602127  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:14.602212  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:14.629914  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:14.629941  445201 cri.go:89] found id: ""
	I1026 09:20:14.629950  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:14.630006  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.633949  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:14.634024  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:14.672189  445201 cri.go:89] found id: ""
	I1026 09:20:14.672215  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.672223  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:14.672229  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:14.672313  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:14.698463  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:14.698487  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:14.698491  445201 cri.go:89] found id: ""
	I1026 09:20:14.698499  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:14.698554  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.702705  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.706624  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:14.706810  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:14.734428  445201 cri.go:89] found id: ""
	I1026 09:20:14.734453  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.734462  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:14.734468  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:14.734525  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:14.761855  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:14.761878  445201 cri.go:89] found id: ""
	I1026 09:20:14.761887  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:14.761942  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:14.765711  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:14.765788  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:14.792195  445201 cri.go:89] found id: ""
	I1026 09:20:14.792230  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.792239  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:14.792245  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:14.792312  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:14.818583  445201 cri.go:89] found id: ""
	I1026 09:20:14.818610  445201 logs.go:282] 0 containers: []
	W1026 09:20:14.818619  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:14.818634  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:14.818646  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:14.897266  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:14.897302  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:14.979061  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:14.979102  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:14.997088  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:14.997117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:15.090410  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:15.090433  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:15.090451  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:15.182335  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:15.182377  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:15.219240  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:15.219276  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:15.249714  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:15.249741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:15.280669  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:15.280700  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:15.326700  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:15.326748  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:18.018858  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:18.030254  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:18.030329  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:18.059128  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:18.059156  445201 cri.go:89] found id: ""
	I1026 09:20:18.059165  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:18.059233  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.063347  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:18.063426  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:18.094571  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:18.094597  445201 cri.go:89] found id: ""
	I1026 09:20:18.094615  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:18.094670  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.098487  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:18.098566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:18.125039  445201 cri.go:89] found id: ""
	I1026 09:20:18.125111  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.125135  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:18.125147  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:18.125223  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:18.151735  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:18.151756  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:18.151761  445201 cri.go:89] found id: ""
	I1026 09:20:18.151769  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:18.151830  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.155811  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.159470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:18.159610  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:18.184732  445201 cri.go:89] found id: ""
	I1026 09:20:18.184798  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.184821  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:18.184846  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:18.184911  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:18.211783  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:18.211804  445201 cri.go:89] found id: ""
	I1026 09:20:18.211813  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:18.211870  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:18.215473  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:18.215597  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:18.242266  445201 cri.go:89] found id: ""
	I1026 09:20:18.242293  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.242308  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:18.242345  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:18.242427  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:18.268622  445201 cri.go:89] found id: ""
	I1026 09:20:18.268645  445201 logs.go:282] 0 containers: []
	W1026 09:20:18.268654  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:18.268687  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:18.268708  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:18.352828  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:18.352866  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:18.400611  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:18.400642  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:18.492244  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:18.492284  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:18.546983  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:18.547061  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:18.641528  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:18.641568  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:18.857153  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:18.857195  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:18.874796  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:18.874829  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:18.941360  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:18.941387  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:18.941401  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:18.969642  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:18.969672  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:21.498770  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:21.512054  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:21.512166  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:21.539248  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:21.539270  445201 cri.go:89] found id: ""
	I1026 09:20:21.539278  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:21.539351  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.543915  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:21.543986  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:21.572592  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:21.572659  445201 cri.go:89] found id: ""
	I1026 09:20:21.572681  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:21.572775  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.576884  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:21.577007  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:21.609371  445201 cri.go:89] found id: ""
	I1026 09:20:21.609417  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.609427  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:21.609434  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:21.609511  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:21.636297  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:21.636322  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:21.636328  445201 cri.go:89] found id: ""
	I1026 09:20:21.636335  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:21.636391  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.640143  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.643865  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:21.643965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:21.669821  445201 cri.go:89] found id: ""
	I1026 09:20:21.669863  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.669873  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:21.669879  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:21.669989  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:21.698658  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:21.698682  445201 cri.go:89] found id: ""
	I1026 09:20:21.698691  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:21.698778  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:21.703205  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:21.703276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:21.729633  445201 cri.go:89] found id: ""
	I1026 09:20:21.729657  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.729666  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:21.729672  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:21.729728  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:21.756700  445201 cri.go:89] found id: ""
	I1026 09:20:21.756724  445201 logs.go:282] 0 containers: []
	W1026 09:20:21.756733  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:21.756748  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:21.756760  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:21.828451  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:21.828488  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:21.859441  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:21.859471  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:21.886181  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:21.886209  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:21.917168  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:21.917198  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:22.111064  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:22.111108  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:22.130247  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:22.130274  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:22.201415  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:22.201433  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:22.201445  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:22.295598  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:22.295635  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:22.335068  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:22.335102  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:24.923329  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:24.934499  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:24.934566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:24.965902  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:24.965921  445201 cri.go:89] found id: ""
	I1026 09:20:24.965930  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:24.965995  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:24.969939  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:24.970014  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:24.998458  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:24.998483  445201 cri.go:89] found id: ""
	I1026 09:20:24.998491  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:24.998566  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.012507  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:25.012667  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:25.051112  445201 cri.go:89] found id: ""
	I1026 09:20:25.051141  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.051151  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:25.051158  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:25.051275  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:25.086581  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:25.086663  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:25.086684  445201 cri.go:89] found id: ""
	I1026 09:20:25.086707  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:25.086829  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.091518  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.096321  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:25.096464  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:25.131206  445201 cri.go:89] found id: ""
	I1026 09:20:25.131242  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.131251  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:25.131258  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:25.131367  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:25.163147  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:25.163180  445201 cri.go:89] found id: ""
	I1026 09:20:25.163189  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:25.163257  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:25.168126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:25.168263  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:25.197318  445201 cri.go:89] found id: ""
	I1026 09:20:25.197345  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.197354  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:25.197360  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:25.197459  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:25.225627  445201 cri.go:89] found id: ""
	I1026 09:20:25.225700  445201 logs.go:282] 0 containers: []
	W1026 09:20:25.225730  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:25.225765  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:25.225792  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:25.242503  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:25.242589  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:25.289744  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:25.289777  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:25.363241  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:25.363281  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:25.399205  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:25.399239  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:25.486729  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:25.486765  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:25.529849  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:25.529881  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:25.733698  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:25.733738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:25.811490  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:25.811518  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:25.811532  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:25.904834  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:25.904871  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:28.433045  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:28.444334  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:28.444403  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:28.474756  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:28.474776  445201 cri.go:89] found id: ""
	I1026 09:20:28.474784  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:28.474838  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.478560  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:28.478631  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:28.515060  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:28.515081  445201 cri.go:89] found id: ""
	I1026 09:20:28.515090  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:28.515145  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.518916  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:28.519004  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:28.559110  445201 cri.go:89] found id: ""
	I1026 09:20:28.559132  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.559140  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:28.559146  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:28.559204  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:28.586836  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:28.586915  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:28.586936  445201 cri.go:89] found id: ""
	I1026 09:20:28.586949  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:28.587008  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.590860  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.594865  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:28.594933  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:28.620390  445201 cri.go:89] found id: ""
	I1026 09:20:28.620417  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.620426  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:28.620433  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:28.620543  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:28.652047  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:28.652069  445201 cri.go:89] found id: ""
	I1026 09:20:28.652077  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:28.652134  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:28.655864  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:28.655969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:28.681847  445201 cri.go:89] found id: ""
	I1026 09:20:28.681871  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.681880  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:28.681886  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:28.681991  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:28.708971  445201 cri.go:89] found id: ""
	I1026 09:20:28.708997  445201 logs.go:282] 0 containers: []
	W1026 09:20:28.709007  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:28.709056  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:28.709074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:28.904970  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:28.905007  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:28.990331  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:28.990369  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:29.021189  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:29.021218  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:29.039055  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:29.039085  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:29.109298  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:29.109321  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:29.109334  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:29.144937  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:29.144971  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:29.219064  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:29.219102  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:29.247935  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:29.248003  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:29.331292  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:29.331370  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:31.863132  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:31.874377  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:31.874449  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:31.901077  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:31.901097  445201 cri.go:89] found id: ""
	I1026 09:20:31.901106  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:31.901160  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.905381  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:31.905450  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:31.930663  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:31.930682  445201 cri.go:89] found id: ""
	I1026 09:20:31.930690  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:31.930770  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.934318  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:31.934429  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:31.961825  445201 cri.go:89] found id: ""
	I1026 09:20:31.961848  445201 logs.go:282] 0 containers: []
	W1026 09:20:31.961857  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:31.961863  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:31.961925  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:31.988820  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:31.988893  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:31.988905  445201 cri.go:89] found id: ""
	I1026 09:20:31.988912  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:31.988980  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.992783  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:31.996548  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:31.996660  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:32.027172  445201 cri.go:89] found id: ""
	I1026 09:20:32.027250  445201 logs.go:282] 0 containers: []
	W1026 09:20:32.027273  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:32.027285  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:32.027359  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:32.059677  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:32.059698  445201 cri.go:89] found id: ""
	I1026 09:20:32.059706  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:32.059760  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:32.063552  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:32.063625  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:32.093636  445201 cri.go:89] found id: ""
	I1026 09:20:32.093706  445201 logs.go:282] 0 containers: []
	W1026 09:20:32.093729  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:32.093751  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:32.093848  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:32.121601  445201 cri.go:89] found id: ""
	I1026 09:20:32.121669  445201 logs.go:282] 0 containers: []
	W1026 09:20:32.121692  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:32.121722  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:32.121760  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:32.138320  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:32.138405  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:32.205854  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:32.205920  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:32.205945  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:32.241239  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:32.241333  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:32.327770  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:32.327807  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:32.363622  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:32.363652  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:32.404634  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:32.404662  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:32.437988  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:32.438017  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:32.654603  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:32.654674  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:32.764869  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:32.764909  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
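Each retry cycle above begins with the same liveness probe: look for a running kube-apiserver process, then ask CRI-O which control-plane containers exist. A hand-run equivalent looks like this (a sketch; the commands and container names are taken from the Run: lines in this log):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is an apiserver process up?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  sudo crictl ps -a --quiet --name="$c"           # prints container IDs, empty if none found
	done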
	I1026 09:20:35.352135  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:35.363548  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:35.363620  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:35.392085  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:35.392111  445201 cri.go:89] found id: ""
	I1026 09:20:35.392120  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:35.392178  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.396199  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:35.396276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:35.424720  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:35.424744  445201 cri.go:89] found id: ""
	I1026 09:20:35.424753  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:35.424810  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.428788  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:35.428888  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:35.456996  445201 cri.go:89] found id: ""
	I1026 09:20:35.457024  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.457033  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:35.457040  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:35.457147  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:35.484260  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:35.484285  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:35.484290  445201 cri.go:89] found id: ""
	I1026 09:20:35.484298  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:35.484377  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.488377  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.492137  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:35.492223  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:35.530290  445201 cri.go:89] found id: ""
	I1026 09:20:35.530318  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.530327  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:35.530333  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:35.530395  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:35.560296  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:35.560320  445201 cri.go:89] found id: ""
	I1026 09:20:35.560328  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:35.560383  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:35.564258  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:35.564335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:35.597504  445201 cri.go:89] found id: ""
	I1026 09:20:35.597532  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.597551  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:35.597558  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:35.597620  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:35.627116  445201 cri.go:89] found id: ""
	I1026 09:20:35.627143  445201 logs.go:282] 0 containers: []
	W1026 09:20:35.627152  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:35.627167  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:35.627179  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:35.644287  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:35.644317  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:35.733109  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:35.733149  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:35.808772  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:35.808809  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:35.841387  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:35.841415  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:35.873263  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:35.873292  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:36.061479  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:36.061520  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:36.135658  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:36.135696  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:36.135726  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:36.186430  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:36.186609  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:36.214658  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:36.214688  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:38.795192  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:38.806691  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:38.806787  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:38.834339  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:38.834360  445201 cri.go:89] found id: ""
	I1026 09:20:38.834369  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:38.834426  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.838346  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:38.838430  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:38.866618  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:38.866638  445201 cri.go:89] found id: ""
	I1026 09:20:38.866646  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:38.866705  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.870616  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:38.870689  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:38.898034  445201 cri.go:89] found id: ""
	I1026 09:20:38.898059  445201 logs.go:282] 0 containers: []
	W1026 09:20:38.898068  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:38.898075  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:38.898133  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:38.926259  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:38.926281  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:38.926286  445201 cri.go:89] found id: ""
	I1026 09:20:38.926295  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:38.926349  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.930561  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.934502  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:38.934577  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:38.961668  445201 cri.go:89] found id: ""
	I1026 09:20:38.961695  445201 logs.go:282] 0 containers: []
	W1026 09:20:38.961704  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:38.961711  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:38.961782  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:38.989873  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:38.989898  445201 cri.go:89] found id: ""
	I1026 09:20:38.989907  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:38.989968  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:38.993821  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:38.993894  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:39.021827  445201 cri.go:89] found id: ""
	I1026 09:20:39.021860  445201 logs.go:282] 0 containers: []
	W1026 09:20:39.021873  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:39.021881  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:39.021959  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:39.047487  445201 cri.go:89] found id: ""
	I1026 09:20:39.047510  445201 logs.go:282] 0 containers: []
	W1026 09:20:39.047518  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:39.047533  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:39.047544  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:39.235619  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:39.235656  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:39.313109  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:39.313130  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:39.313143  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:39.394391  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:39.394430  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:39.482870  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:39.482920  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:39.526396  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:39.526486  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:39.543665  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:39.543693  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:39.648571  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:39.648615  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:39.685280  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:39.685320  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:39.723214  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:39.723246  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:42.254876  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:42.267740  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:42.267847  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:42.297640  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:42.297709  445201 cri.go:89] found id: ""
	I1026 09:20:42.297733  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:42.297795  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.301950  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:42.302025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:42.332906  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:42.332979  445201 cri.go:89] found id: ""
	I1026 09:20:42.333002  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:42.333094  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.337360  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:42.337477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:42.369545  445201 cri.go:89] found id: ""
	I1026 09:20:42.369571  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.369580  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:42.369586  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:42.369643  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:42.400994  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:42.401018  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:42.401023  445201 cri.go:89] found id: ""
	I1026 09:20:42.401030  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:42.401085  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.405385  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.408969  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:42.409057  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:42.437013  445201 cri.go:89] found id: ""
	I1026 09:20:42.437078  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.437101  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:42.437125  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:42.437201  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:42.469295  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:42.469316  445201 cri.go:89] found id: ""
	I1026 09:20:42.469324  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:42.469400  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:42.473224  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:42.473298  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:42.509254  445201 cri.go:89] found id: ""
	I1026 09:20:42.509280  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.509289  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:42.509295  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:42.509352  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:42.536687  445201 cri.go:89] found id: ""
	I1026 09:20:42.536710  445201 logs.go:282] 0 containers: []
	W1026 09:20:42.536720  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:42.536733  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:42.536743  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:42.567830  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:42.567857  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:42.657078  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:42.657117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:42.738369  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:42.738407  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:42.765459  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:42.765488  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:42.793003  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:42.793032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:42.874464  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:42.874500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:43.074563  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:43.074602  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:43.091834  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:43.091868  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:43.168463  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:43.168488  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:43.168529  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:45.716953  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:45.727823  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:45.727901  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:45.754838  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:45.754862  445201 cri.go:89] found id: ""
	I1026 09:20:45.754871  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:45.754935  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.758953  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:45.759048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:45.786578  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:45.786611  445201 cri.go:89] found id: ""
	I1026 09:20:45.786620  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:45.786677  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.790410  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:45.790484  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:45.817100  445201 cri.go:89] found id: ""
	I1026 09:20:45.817125  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.817134  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:45.817140  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:45.817195  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:45.844261  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:45.844284  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:45.844288  445201 cri.go:89] found id: ""
	I1026 09:20:45.844296  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:45.844352  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.848186  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.851653  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:45.851724  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:45.877419  445201 cri.go:89] found id: ""
	I1026 09:20:45.877444  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.877453  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:45.877459  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:45.877563  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:45.903685  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:45.903749  445201 cri.go:89] found id: ""
	I1026 09:20:45.903770  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:45.903841  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:45.907749  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:45.907835  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:45.935133  445201 cri.go:89] found id: ""
	I1026 09:20:45.935199  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.935220  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:45.935235  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:45.935313  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:45.963149  445201 cri.go:89] found id: ""
	I1026 09:20:45.963222  445201 logs.go:282] 0 containers: []
	W1026 09:20:45.963244  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:45.963276  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:45.963312  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:46.154397  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:46.154434  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:46.173037  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:46.173076  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:46.248971  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:46.249042  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:46.249068  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:46.289852  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:46.289882  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:46.316390  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:46.316425  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:46.401774  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:46.401823  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:46.440602  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:46.440633  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:46.536038  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:46.536075  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:46.611345  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:46.611389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:49.149727  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:49.161338  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:49.161429  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:49.189053  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:49.189077  445201 cri.go:89] found id: ""
	I1026 09:20:49.189085  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:49.189156  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.193041  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:49.193123  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:49.222571  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:49.222596  445201 cri.go:89] found id: ""
	I1026 09:20:49.222605  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:49.222663  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.226532  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:49.226607  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:49.258470  445201 cri.go:89] found id: ""
	I1026 09:20:49.258495  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.258503  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:49.258509  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:49.258565  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:49.285929  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:49.285953  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:49.285958  445201 cri.go:89] found id: ""
	I1026 09:20:49.285966  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:49.286021  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.289772  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.293274  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:49.293397  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:49.320933  445201 cri.go:89] found id: ""
	I1026 09:20:49.320957  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.320966  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:49.320985  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:49.321048  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:49.347746  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:49.347770  445201 cri.go:89] found id: ""
	I1026 09:20:49.347784  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:49.347843  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:49.351610  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:49.351682  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:49.384317  445201 cri.go:89] found id: ""
	I1026 09:20:49.384343  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.384352  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:49.384358  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:49.384417  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:49.410794  445201 cri.go:89] found id: ""
	I1026 09:20:49.410819  445201 logs.go:282] 0 containers: []
	W1026 09:20:49.410828  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:49.410843  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:49.410855  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:49.484572  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
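"The connection to the server localhost:8443 was refused" means nothing is accepting TCP connections on the apiserver port inside the node, so every kubectl call here fails before authentication is even attempted. A hypothetical probe showing the same distinction (refused vs. listening), assuming the same host:port as the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the apiserver port; a refused connection reproduces the
		// failure mode kubectl reports above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}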
	I1026 09:20:49.484598  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:49.484611  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:49.578246  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:49.578283  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:49.617058  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:49.617098  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:49.708347  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:49.708386  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:49.737503  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:49.737539  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:49.773301  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:49.773335  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:49.858035  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:49.858075  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:50.062201  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:50.062240  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:50.079457  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:50.079491  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
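Between gathering passes the runner re-checks for a live apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`; the timestamps that follow (09:20:52, 09:20:56, 09:20:59, ...) show a cadence of roughly three to four seconds. A sketch of that wait loop, with the retry budget and sleep interval as illustrative assumptions rather than minikube's real values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a process whose full command line
	// matches the pattern exists; pgrep exits non-zero when nothing matches.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		for attempt := 1; attempt <= 20; attempt++ { // retry budget is illustrative
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			fmt.Printf("attempt %d: kube-apiserver not found, retrying\n", attempt)
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}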
	I1026 09:20:52.620387  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:52.631696  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:52.631768  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:52.662257  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:52.662280  445201 cri.go:89] found id: ""
	I1026 09:20:52.662288  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:52.662341  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.666304  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:52.666376  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:52.692937  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:52.693003  445201 cri.go:89] found id: ""
	I1026 09:20:52.693025  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:52.693111  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.696900  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:52.696968  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:52.722816  445201 cri.go:89] found id: ""
	I1026 09:20:52.722849  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.722859  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:52.722865  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:52.722919  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:52.749987  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:52.750011  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:52.750017  445201 cri.go:89] found id: ""
	I1026 09:20:52.750024  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:52.750078  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.753715  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.757282  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:52.757350  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:52.786064  445201 cri.go:89] found id: ""
	I1026 09:20:52.786143  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.786167  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:52.786192  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:52.786286  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:52.813518  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:52.813544  445201 cri.go:89] found id: ""
	I1026 09:20:52.813553  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:52.813610  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:52.817548  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:52.817623  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:52.851278  445201 cri.go:89] found id: ""
	I1026 09:20:52.851307  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.851315  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:52.851322  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:52.851382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:52.876624  445201 cri.go:89] found id: ""
	I1026 09:20:52.876699  445201 logs.go:282] 0 containers: []
	W1026 09:20:52.876722  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:52.876743  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:52.876769  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:52.911672  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:52.911705  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:53.004812  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:53.004858  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:53.204975  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:53.205013  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:53.293863  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:53.293898  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:53.384132  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:53.384173  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:53.415787  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:53.415819  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:53.445484  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:53.445517  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:53.490785  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:53.490818  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:53.508703  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:53.508789  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:53.581038  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:56.082162  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:56.093503  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:56.093607  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:56.121809  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:56.121828  445201 cri.go:89] found id: ""
	I1026 09:20:56.121836  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:56.121913  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.125763  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:56.125865  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:56.151639  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:56.151664  445201 cri.go:89] found id: ""
	I1026 09:20:56.151673  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:56.151798  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.156294  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:56.156429  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:56.186858  445201 cri.go:89] found id: ""
	I1026 09:20:56.186885  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.186894  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:56.186900  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:56.186980  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:56.212611  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:56.212635  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:56.212640  445201 cri.go:89] found id: ""
	I1026 09:20:56.212647  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:56.212705  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.216976  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.220594  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:56.220669  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:56.250514  445201 cri.go:89] found id: ""
	I1026 09:20:56.250590  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.250614  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:56.250636  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:56.250762  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:56.278227  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:56.278302  445201 cri.go:89] found id: ""
	I1026 09:20:56.278326  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:56.278413  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:56.282569  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:56.282705  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:56.309251  445201 cri.go:89] found id: ""
	I1026 09:20:56.309327  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.309350  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:56.309373  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:56.309465  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:56.336184  445201 cri.go:89] found id: ""
	I1026 09:20:56.336254  445201 logs.go:282] 0 containers: []
	W1026 09:20:56.336275  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:56.336307  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:20:56.336344  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:20:56.421731  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:20:56.421769  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:20:56.452283  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:56.452357  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:56.657837  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:56.657882  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:56.675344  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:20:56.675374  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:20:56.751910  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:20:56.751931  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:56.751943  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:56.839332  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:20:56.839375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:56.875572  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:20:56.875605  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:56.952770  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:20:56.952810  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:56.983110  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:20:56.983141  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
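Each gathering pass caps every source at its last 400 lines: `crictl logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for systemd units. A minimal sketch of the container half of that step; the container ID below is a placeholder, not one from this report:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs returns the last n lines a container wrote, the
	// way the gathering steps above do. CombinedOutput keeps stderr,
	// which is where most control-plane components log.
	func tailContainerLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := tailContainerLogs("<container-id>", 400)
		if err != nil {
			fmt.Println("crictl logs failed:", err)
		}
		fmt.Print(logs)
	}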
	I1026 09:20:59.513559  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:20:59.526258  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:20:59.526329  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:20:59.552836  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:20:59.552860  445201 cri.go:89] found id: ""
	I1026 09:20:59.552869  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:20:59.552925  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.556827  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:20:59.556906  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:20:59.591057  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:20:59.591077  445201 cri.go:89] found id: ""
	I1026 09:20:59.591085  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:20:59.591141  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.595063  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:20:59.595138  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:20:59.622673  445201 cri.go:89] found id: ""
	I1026 09:20:59.622701  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.622736  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:20:59.622745  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:20:59.622801  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:20:59.651306  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:20:59.651330  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:20:59.651336  445201 cri.go:89] found id: ""
	I1026 09:20:59.651355  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:20:59.651432  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.655121  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.658855  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:20:59.658960  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:20:59.685400  445201 cri.go:89] found id: ""
	I1026 09:20:59.685426  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.685436  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:20:59.685442  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:20:59.685498  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:20:59.712619  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:20:59.712643  445201 cri.go:89] found id: ""
	I1026 09:20:59.712651  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:20:59.712710  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:20:59.716925  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:20:59.717025  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:20:59.752250  445201 cri.go:89] found id: ""
	I1026 09:20:59.752274  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.752283  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:20:59.752289  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:20:59.752358  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:20:59.778971  445201 cri.go:89] found id: ""
	I1026 09:20:59.778994  445201 logs.go:282] 0 containers: []
	W1026 09:20:59.779004  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:20:59.779019  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:20:59.779033  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:20:59.972175  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:20:59.972210  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:20:59.988939  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:20:59.988971  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:00.269861  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:00.269899  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:00.328255  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:00.328296  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:00.451120  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:00.451169  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:00.488800  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:00.488955  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:00.592074  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:00.592097  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:00.592112  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:00.623110  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:00.623147  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:00.707786  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:00.707828  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:03.247930  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:03.261785  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:03.261879  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:03.291325  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:03.291349  445201 cri.go:89] found id: ""
	I1026 09:21:03.291358  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:03.291416  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.295568  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:03.295641  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:03.323403  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:03.323423  445201 cri.go:89] found id: ""
	I1026 09:21:03.323432  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:03.323489  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.327333  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:03.327406  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:03.354897  445201 cri.go:89] found id: ""
	I1026 09:21:03.354921  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.354935  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:03.354942  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:03.355003  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:03.387726  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:03.387803  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:03.387823  445201 cri.go:89] found id: ""
	I1026 09:21:03.387847  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:03.387920  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.392298  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.396190  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:03.396308  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:03.424850  445201 cri.go:89] found id: ""
	I1026 09:21:03.424875  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.424884  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:03.424890  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:03.424969  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:03.453335  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:03.453360  445201 cri.go:89] found id: ""
	I1026 09:21:03.453369  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:03.453472  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:03.457581  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:03.457675  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:03.486882  445201 cri.go:89] found id: ""
	I1026 09:21:03.486954  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.486977  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:03.486999  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:03.487104  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:03.529823  445201 cri.go:89] found id: ""
	I1026 09:21:03.529848  445201 logs.go:282] 0 containers: []
	W1026 09:21:03.529858  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:03.529893  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:03.529922  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:03.740698  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:03.740737  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:03.763204  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:03.763234  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:03.837225  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:03.837243  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:03.837256  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:03.926755  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:03.926793  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:03.956255  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:03.956282  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:03.983359  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:03.983392  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:04.018225  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:04.018254  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:04.062731  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:04.062761  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:04.143784  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:04.143824  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:06.728492  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:06.740045  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:06.740162  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:06.767428  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:06.767450  445201 cri.go:89] found id: ""
	I1026 09:21:06.767458  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:06.767515  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.771160  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:06.771294  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:06.798465  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:06.798489  445201 cri.go:89] found id: ""
	I1026 09:21:06.798498  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:06.798574  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.803242  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:06.803327  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:06.829872  445201 cri.go:89] found id: ""
	I1026 09:21:06.829900  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.829909  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:06.829915  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:06.829978  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:06.862567  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:06.862598  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:06.862607  445201 cri.go:89] found id: ""
	I1026 09:21:06.862619  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:06.862686  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.866827  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.870490  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:06.870561  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:06.898277  445201 cri.go:89] found id: ""
	I1026 09:21:06.898306  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.898314  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:06.898321  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:06.898379  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:06.925628  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:06.925653  445201 cri.go:89] found id: ""
	I1026 09:21:06.925661  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:06.925717  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:06.929528  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:06.929597  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:06.960631  445201 cri.go:89] found id: ""
	I1026 09:21:06.960697  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.960719  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:06.960733  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:06.960809  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:06.990323  445201 cri.go:89] found id: ""
	I1026 09:21:06.990349  445201 logs.go:282] 0 containers: []
	W1026 09:21:06.990358  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:06.990390  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:06.990410  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:07.184993  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:07.185029  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:07.201638  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:07.201678  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:07.290033  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:07.290071  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:07.331739  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:07.331775  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:07.410118  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:07.410155  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:07.444128  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:07.444158  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:07.530207  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:07.530274  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:07.530301  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:07.563028  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:07.563063  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:07.591338  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:07.591364  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
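The kubelet and CRI-O sources are not containers, so they come from systemd instead, via `journalctl -u <unit> -n 400` as shown above. The same exec-based approach covers them; a sketch with the unit names taken from the commands in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailUnitLogs fetches the last n journal lines for a systemd unit,
	// matching the kubelet and CRI-O gathering steps above.
	func tailUnitLogs(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "crio"} {
			logs, err := tailUnitLogs(unit, 400)
			if err != nil {
				fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
				continue
			}
			fmt.Printf("== %s ==\n%s", unit, logs)
		}
	}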
	I1026 09:21:10.178821  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:10.190222  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:10.190298  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:10.227875  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:10.227897  445201 cri.go:89] found id: ""
	I1026 09:21:10.227906  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:10.227964  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.231746  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:10.231821  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:10.260172  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:10.260192  445201 cri.go:89] found id: ""
	I1026 09:21:10.260200  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:10.260270  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.264202  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:10.264276  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:10.291339  445201 cri.go:89] found id: ""
	I1026 09:21:10.291367  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.291377  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:10.291383  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:10.291441  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:10.318497  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:10.318520  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:10.318525  445201 cri.go:89] found id: ""
	I1026 09:21:10.318532  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:10.318590  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.322446  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.326370  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:10.326464  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:10.354158  445201 cri.go:89] found id: ""
	I1026 09:21:10.354181  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.354191  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:10.354197  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:10.354254  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:10.389289  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:10.389313  445201 cri.go:89] found id: ""
	I1026 09:21:10.389321  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:10.389373  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:10.393257  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:10.393338  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:10.420691  445201 cri.go:89] found id: ""
	I1026 09:21:10.420725  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.420733  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:10.420770  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:10.420851  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:10.447235  445201 cri.go:89] found id: ""
	I1026 09:21:10.447258  445201 logs.go:282] 0 containers: []
	W1026 09:21:10.447267  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:10.447300  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:10.447316  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:10.463955  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:10.463983  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:10.547364  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:10.547386  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:10.547400  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:10.638440  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:10.638477  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:10.667342  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:10.667375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:10.880045  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:10.880083  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:10.917355  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:10.917389  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:10.994430  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:10.994468  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:11.025132  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:11.025199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:11.106492  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:11.106530  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:13.646984  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:13.658294  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:13.658365  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:13.686138  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:13.686159  445201 cri.go:89] found id: ""
	I1026 09:21:13.686166  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:13.686221  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.689904  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:13.689975  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:13.717871  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:13.717900  445201 cri.go:89] found id: ""
	I1026 09:21:13.717909  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:13.717964  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.721862  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:13.721934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:13.748415  445201 cri.go:89] found id: ""
	I1026 09:21:13.748446  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.748454  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:13.748460  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:13.748521  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:13.782153  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:13.782172  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:13.782177  445201 cri.go:89] found id: ""
	I1026 09:21:13.782184  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:13.782242  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.786369  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.790770  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:13.790856  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:13.819605  445201 cri.go:89] found id: ""
	I1026 09:21:13.819633  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.819641  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:13.819647  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:13.819759  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:13.848568  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:13.848595  445201 cri.go:89] found id: ""
	I1026 09:21:13.848604  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:13.848722  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:13.852510  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:13.852595  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:13.883059  445201 cri.go:89] found id: ""
	I1026 09:21:13.883086  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.883096  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:13.883102  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:13.883159  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:13.909300  445201 cri.go:89] found id: ""
	I1026 09:21:13.909326  445201 logs.go:282] 0 containers: []
	W1026 09:21:13.909341  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:13.909356  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:13.909372  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:14.101663  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:14.101702  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:14.148245  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:14.148275  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:14.236589  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:14.236626  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:14.268611  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:14.268642  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:14.298171  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:14.298199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:14.378289  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:14.378327  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:14.411068  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:14.411100  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:14.429706  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:14.429738  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:14.511924  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:14.511945  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:14.511959  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:17.102246  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:17.114188  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:17.114253  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:17.142450  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:17.142471  445201 cri.go:89] found id: ""
	I1026 09:21:17.142479  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:17.142535  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.146640  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:17.146757  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:17.172790  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:17.172813  445201 cri.go:89] found id: ""
	I1026 09:21:17.172822  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:17.172879  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.176741  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:17.176813  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:17.204125  445201 cri.go:89] found id: ""
	I1026 09:21:17.204151  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.204160  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:17.204166  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:17.204227  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:17.230852  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:17.230875  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:17.230880  445201 cri.go:89] found id: ""
	I1026 09:21:17.230888  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:17.230943  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.234869  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.238552  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:17.238641  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:17.269934  445201 cri.go:89] found id: ""
	I1026 09:21:17.269960  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.269970  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:17.269977  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:17.270036  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:17.296616  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:17.296654  445201 cri.go:89] found id: ""
	I1026 09:21:17.296664  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:17.296735  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:17.300505  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:17.300579  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:17.327307  445201 cri.go:89] found id: ""
	I1026 09:21:17.327332  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.327340  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:17.327347  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:17.327409  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:17.363904  445201 cri.go:89] found id: ""
	I1026 09:21:17.363930  445201 logs.go:282] 0 containers: []
	W1026 09:21:17.363939  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:17.363952  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:17.363972  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:17.406846  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:17.406876  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:17.609584  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:17.609620  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:17.698700  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:17.698741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:17.736430  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:17.736469  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:17.767180  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:17.767207  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:17.849128  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:17.849163  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:17.880031  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:17.880059  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:17.896355  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:17.896384  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:17.970628  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:17.970690  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:17.970778  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:20.550647  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:20.561701  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:20.561785  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:20.590001  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:20.590020  445201 cri.go:89] found id: ""
	I1026 09:21:20.590028  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:20.590082  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.593813  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:20.593882  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:20.626195  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:20.626219  445201 cri.go:89] found id: ""
	I1026 09:21:20.626228  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:20.626282  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.630016  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:20.630089  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:20.660378  445201 cri.go:89] found id: ""
	I1026 09:21:20.660414  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.660423  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:20.660430  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:20.660509  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:20.687394  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:20.687416  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:20.687421  445201 cri.go:89] found id: ""
	I1026 09:21:20.687429  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:20.687484  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.691121  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.694770  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:20.694839  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:20.721137  445201 cri.go:89] found id: ""
	I1026 09:21:20.721163  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.721172  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:20.721179  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:20.721240  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:20.747340  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:20.747360  445201 cri.go:89] found id: ""
	I1026 09:21:20.747368  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:20.747430  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:20.751174  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:20.751287  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:20.777016  445201 cri.go:89] found id: ""
	I1026 09:21:20.777043  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.777052  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:20.777059  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:20.777137  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:20.804106  445201 cri.go:89] found id: ""
	I1026 09:21:20.804130  445201 logs.go:282] 0 containers: []
	W1026 09:21:20.804145  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:20.804159  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:20.804170  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:20.829056  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:20.829083  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:20.859258  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:20.859285  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:21.057262  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:21.057314  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:21.098469  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:21.098501  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:21.178683  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:21.178821  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:21.267215  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:21.267252  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:21.284825  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:21.284855  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:21.355190  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:21.355210  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:21.355222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:21.449675  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:21.449715  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:23.979650  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:23.990886  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:23.990994  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:24.022185  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:24.022205  445201 cri.go:89] found id: ""
	I1026 09:21:24.022212  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:24.022295  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.026431  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:24.026556  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:24.056183  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:24.056206  445201 cri.go:89] found id: ""
	I1026 09:21:24.056215  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:24.056290  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.060036  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:24.060114  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:24.089520  445201 cri.go:89] found id: ""
	I1026 09:21:24.089548  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.089558  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:24.089564  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:24.089622  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:24.119130  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:24.119151  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:24.119156  445201 cri.go:89] found id: ""
	I1026 09:21:24.119164  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:24.119220  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.122899  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.126615  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:24.126690  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:24.154462  445201 cri.go:89] found id: ""
	I1026 09:21:24.154485  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.154502  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:24.154510  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:24.154569  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:24.182057  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:24.182123  445201 cri.go:89] found id: ""
	I1026 09:21:24.182145  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:24.182238  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:24.186279  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:24.186401  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:24.213464  445201 cri.go:89] found id: ""
	I1026 09:21:24.213530  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.213555  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:24.213577  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:24.213659  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:24.243102  445201 cri.go:89] found id: ""
	I1026 09:21:24.243129  445201 logs.go:282] 0 containers: []
	W1026 09:21:24.243138  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:24.243153  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:24.243164  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:24.443606  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:24.443648  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:24.462087  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:24.462117  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:24.538553  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:24.538574  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:24.538588  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:24.638803  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:24.638843  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:24.685028  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:24.685062  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:24.782516  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:24.782556  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:24.812833  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:24.812902  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:24.844661  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:24.844689  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:24.874272  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:24.874302  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:27.459319  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:27.471232  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:21:27.471319  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:21:27.505082  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:27.505104  445201 cri.go:89] found id: ""
	I1026 09:21:27.505112  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:21:27.505205  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.511716  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:21:27.511848  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:21:27.539424  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:27.539495  445201 cri.go:89] found id: ""
	I1026 09:21:27.539518  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:21:27.539604  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.543429  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:21:27.543501  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:21:27.568844  445201 cri.go:89] found id: ""
	I1026 09:21:27.568869  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.568878  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:21:27.568884  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:21:27.568940  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:21:27.594428  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:27.594448  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:27.594453  445201 cri.go:89] found id: ""
	I1026 09:21:27.594461  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:21:27.594513  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.598153  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.601518  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:21:27.601583  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:21:27.630965  445201 cri.go:89] found id: ""
	I1026 09:21:27.630992  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.631001  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:21:27.631014  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:21:27.631070  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:21:27.657171  445201 cri.go:89] found id: "59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:27.657192  445201 cri.go:89] found id: ""
	I1026 09:21:27.657201  445201 logs.go:282] 1 containers: [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c]
	I1026 09:21:27.657259  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:21:27.660934  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:21:27.661026  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:21:27.688105  445201 cri.go:89] found id: ""
	I1026 09:21:27.688130  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.688139  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:21:27.688145  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:21:27.688202  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:21:27.714913  445201 cri.go:89] found id: ""
	I1026 09:21:27.714940  445201 logs.go:282] 0 containers: []
	W1026 09:21:27.714948  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:21:27.714963  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:21:27.714977  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:21:27.742801  445201 logs.go:123] Gathering logs for kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] ...
	I1026 09:21:27.742830  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	I1026 09:21:27.773812  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:21:27.773840  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:21:27.855060  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:21:27.855097  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:21:27.890187  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:21:27.890217  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:21:28.098021  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:21:28.098058  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:21:28.186518  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:21:28.186596  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:21:28.264677  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:21:28.264714  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:21:28.282938  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:21:28.282969  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:21:28.348772  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:21:28.348792  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:21:28.348805  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:21:30.895244  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:21:30.910800  445201 out.go:203] 
	W1026 09:21:30.913672  445201 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1026 09:21:30.913714  445201 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1026 09:21:30.913725  445201 out.go:285] * Related issues:
	W1026 09:21:30.913740  445201 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1026 09:21:30.913752  445201 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1026 09:21:30.916625  445201 out.go:203] 
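	The failure summary above is the whole story in miniature: minikube polled "sudo pgrep -xnf kube-apiserver.*minikube.*" every few seconds for the full 6m0s wait, and the apiserver process never appeared. A minimal sketch of re-running the same checks by hand on the node (assuming shell access via "minikube ssh"; getenforce exists only where SELinux tooling is installed):
	
	  # the exact liveness probe minikube polled for 6m0s
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # the runtime's view of the apiserver container
	  sudo crictl ps -a --name=kube-apiserver
	  # is anything listening on the apiserver port?
	  sudo ss -tlnp | grep 8443
	  # SELinux state, per the suggestion above (assumes the tool is installed)
	  getenforce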
	
	
	==> CRI-O <==
	Oct 26 09:20:39 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:39.373301913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=210496ff-577a-4ff8-b013-4ceb566611bf name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:20:39 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:39.37995076Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=14642ab3-2ba6-4455-a98b-51abdaae42dd name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:20:39 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:39.381941111Z" level=info msg="Creating container: kube-system/kube-apiserver-kubernetes-upgrade-275732/kube-apiserver" id=a547dc00-c1cc-46f8-a01c-ee163edf8509 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:20:39 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:39.382100538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:20:39 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:39.389563245Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1" id=a547dc00-c1cc-46f8-a01c-ee163edf8509 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:20:51 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:51.373516382Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=113cfa5c-45d0-41ef-a60b-8c449f093190 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:20:51 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:51.374834814Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1de27940-dd84-49a7-9896-c941856556ce name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:20:51 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:51.375892485Z" level=info msg="Creating container: kube-system/kube-apiserver-kubernetes-upgrade-275732/kube-apiserver" id=3d5a66c8-6d70-4225-b47c-0bd35d37819f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:20:51 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:51.375991744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:20:51 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:20:51.380225134Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1" id=3d5a66c8-6d70-4225-b47c-0bd35d37819f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:21:05 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:05.373281372Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e88a6190-d3b3-46a6-b86f-ca49eb0d7151 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:21:05 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:05.374763097Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=601404ac-bbae-469e-bfdf-c2ede40d67f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:21:05 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:05.375921085Z" level=info msg="Creating container: kube-system/kube-apiserver-kubernetes-upgrade-275732/kube-apiserver" id=e02e6def-47dd-4d67-8933-16616598d980 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:21:05 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:05.376055741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:21:05 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:05.380140658Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1" id=e02e6def-47dd-4d67-8933-16616598d980 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:21:17 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:17.372150917Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=37057711-3202-484c-9817-d0cc24445931 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:21:17 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:17.380080191Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=50a4f1c7-9ba3-4b6b-8dba-0a4596f76988 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:21:17 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:17.381325614Z" level=info msg="Creating container: kube-system/kube-apiserver-kubernetes-upgrade-275732/kube-apiserver" id=296edfef-c1bd-49ee-918f-2a185675b186 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:21:17 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:17.381445271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:21:17 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:17.385317606Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1" id=296edfef-c1bd-49ee-918f-2a185675b186 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:21:30 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:30.376888617Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c3c3b5cd-fdfa-4524-86eb-f6d9eb755a5a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:21:30 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:30.37876645Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=cfb0b585-957d-4f8a-b243-70dc3eabe45d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:21:30 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:30.380492429Z" level=info msg="Creating container: kube-system/kube-apiserver-kubernetes-upgrade-275732/kube-apiserver" id=d51e90b6-e8da-4639-ae4f-8e74e54d6f81 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:21:30 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:30.380594198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:21:30 kubernetes-upgrade-275732 crio[1696]: time="2025-10-26T09:21:30.384946483Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1" id=d51e90b6-e8da-4639-ae4f-8e74e54d6f81 name=/runtime.v1.RuntimeService/CreateContainer
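	Every "Creating container" for kube-apiserver above is followed a few milliseconds later by "createCtr: releasing container name ..." with no matching start, which (assuming standard CRI-O behaviour) means each CreateContainer call is failing and the reserved name is being handed back. The underlying error is usually easier to surface from the kubelet side; a sketch:
	
	  # kubelet's account of the failing static pod
	  sudo journalctl -u kubelet -n 200 | grep -i kube-apiserver
	  # only error-and-worse lines from CRI-O itself
	  sudo journalctl -u crio -n 200 --priority=err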
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                 NAMESPACE
	59a79c4173ba4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Exited              kube-controller-manager   5                   98fb67a16dd18       kube-controller-manager-kubernetes-upgrade-275732   kube-system
	de32875ececb4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      0                   40fc31521a281       etcd-kubernetes-upgrade-275732                      kube-system
	d400e8b9f0ae1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            2                   13cbd52f16d0e       kube-scheduler-kubernetes-upgrade-275732            kube-system
	a0bdcee41ce10       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Exited              kube-scheduler            1                   13cbd52f16d0e       kube-scheduler-kubernetes-upgrade-275732            kube-system
	68f6053e321f9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            0                   4adbfdead37de       kube-apiserver-kubernetes-upgrade-275732            kube-system
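	The table shows the failure chain compactly: kube-apiserver exited 8 minutes ago on attempt 0 and was never recreated, kube-controller-manager has burned through 5 restart attempts waiting for it, while etcd and the newer kube-scheduler are still running. The dead apiserver's termination details can be pulled from the runtime; a sketch using the full ID from the table:
	
	  # grep the JSON inspect output for the exit code and reason
	  sudo crictl inspect 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668 | grep -E '"exitCode"|"reason"'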
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct26 08:51] overlayfs: idmapped layers are currently not supported
	[Oct26 08:52] overlayfs: idmapped layers are currently not supported
	[ +49.561224] hrtimer: interrupt took 37499666 ns
	[Oct26 08:53] overlayfs: idmapped layers are currently not supported
	[Oct26 08:58] overlayfs: idmapped layers are currently not supported
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
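	The dmesg excerpt is noise relative to this failure: "overlayfs: idmapped layers are currently not supported" is emitted by container mounts on this 5.15 kernel, and the single hrtimer latency blip is not tied to the apiserver. Narrowing to genuinely severe messages should drop the warn level that admits the overlayfs notices (same flag family minikube ran above):
	
	  sudo dmesg --level=err,crit,alert,emerg | tail -n 50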
	
	
	==> etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] <==
	{"level":"info","ts":"2025-10-26T09:15:29.501868Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-26T09:15:29.501921Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-26T09:15:29.501995Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T09:15:29.502007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-26T09:15:29.502022Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2025-10-26T09:15:29.503141Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2025-10-26T09:15:29.503182Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-26T09:15:29.503211Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2025-10-26T09:15:29.503223Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2025-10-26T09:15:29.504502Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-275732 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T09:15:29.504618Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:15:29.504843Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-10-26T09:15:29.504977Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:15:29.505184Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T09:15:29.505205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T09:15:29.506822Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T09:15:29.509429Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-26T09:15:29.509823Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-10-26T09:15:29.509990Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-26T09:15:29.510044Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-26T09:15:29.510119Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-10-26T09:15:29.510184Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"warn","ts":"2025-10-26T09:15:29.512757Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-26T09:15:29.514860Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T09:15:29.520215Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:21:33 up  3:04,  0 user,  load average: 0.76, 2.46, 2.50
	Linux kubernetes-upgrade-275732 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] <==
	E1026 09:13:52.377521       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.377824       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-26T09:13:52.377851Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018a7c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1026 09:13:52.377908       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 66.84µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1026 09:13:52.378048       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-26T09:13:52.378237Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018a7c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1026 09:13:52.378273       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E1026 09:13:52.378560       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.378961       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.378980       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.197633ms" method="GET" path="/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer" result=null
	E1026 09:13:52.379121       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.379407       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-26T09:13:52.379434Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018a7c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1026 09:13:52.379486       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 62.031µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1026 09:13:52.379697       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.379730       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.315µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1026 09:13:52.379743       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.379841       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.39281ms" method="GET" path="/api/v1/namespaces/kube-system/serviceaccounts/clusterrole-aggregation-controller" result=null
	E1026 09:13:52.379866       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.386963       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.387013       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.387057       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.387074       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.387091       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1026 09:13:52.387214       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="9.511451ms" method="POST" path="/api/v1/namespaces/kube-system/serviceaccounts/service-cidrs-controller/token" result=null
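	The apiserver's final log lines are all from 09:13:52: every handler timed out once the etcd client lost 127.0.0.1:2379, and the process exited shortly after, never to be successfully recreated (see the CRI-O section above). Its last moments can be replayed with the same command minikube used throughout this log:
	
	  sudo crictl logs --tail 100 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668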
	
	
	==> kube-controller-manager [59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c] <==
	I1026 09:18:51.469051       1 serving.go:386] Generated self-signed cert in-memory
	I1026 09:18:53.754496       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1026 09:18:53.757485       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:18:53.759841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1026 09:18:53.759952       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:18:53.760004       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 09:18:53.760014       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1026 09:19:03.762319       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.76.2:8443/healthz\": dial tcp 192.168.76.2:8443: connect: connection refused"
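	The controller-manager comes up cleanly, serves on 127.0.0.1:10257, then aborts ten seconds later because the apiserver's /healthz on 192.168.76.2:8443 is refused; that ten-second crash loop is what produced the 5 attempts recorded in the container status table. Its tail can be re-read the same way minikube did above:
	
	  sudo crictl logs --tail 20 59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c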
	
	
	==> kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] <==
	I1026 09:13:55.269003       1 serving.go:386] Generated self-signed cert in-memory
	W1026 09:13:56.296756       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.76.2:8443: connect: connection refused
	W1026 09:13:56.296796       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 09:13:56.296804       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 09:13:56.307748       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:13:56.307786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:13:56.315059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1026 09:13:56.315170       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1026 09:13:56.315836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:13:56.315863       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1026 09:13:56.315878       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:13:56.315887       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:13:56.315904       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:13:56.315913       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 09:13:56.316003       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 09:13:56.316016       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 09:13:56.316021       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 09:13:56.316035       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] <==
	E1026 09:20:34.535949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:20:35.621523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:20:37.868219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.76.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:20:41.621176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:20:43.131371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:20:46.752454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:20:47.905990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.76.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:20:48.092298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:20:48.548893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 09:20:53.049002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:20:57.573846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:20:58.495191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.76.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:21:00.726047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:21:04.203438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:21:04.340814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:21:07.126655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:21:09.659508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:21:11.229410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:21:13.971280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:21:16.537670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:21:18.766324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:21:23.776203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.76.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:21:28.619530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:21:28.674325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:21:30.273672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.76.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	
	
	==> kubelet <==
	Oct 26 09:21:20 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:20.373126     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6bc6b21d19f34cb3d7ff32e78fb91e8c" pod="kube-system/etcd-kubernetes-upgrade-275732"
	Oct 26 09:21:20 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:20.373438     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c4c2ef496cef817a580cdd7032489a6c" pod="kube-system/kube-apiserver-kubernetes-upgrade-275732"
	Oct 26 09:21:20 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:20.373833     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e36d7c2f81e3fbdc62c5de8770e15e23" pod="kube-system/kube-controller-manager-kubernetes-upgrade-275732"
	Oct 26 09:21:20 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:20.600448     958 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 26 09:21:22 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:22.690350     958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-275732?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 26 09:21:24 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:24.687325     958 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-26T09:21:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-26T09:21:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-26T09:21:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-26T09:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\"
:[\\\"registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5\\\",\\\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\\\",\\\"registry.k8s.io/etcd:3.6.4-0\\\"],\\\"sizeBytes\\\":205987068},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3\\\",\\\"registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b\\\",\\\"registry.k8s.io/etcd:3.5.9-0\\\"],\\\"sizeBytes\\\":182203183},{\\\"names\\\":[\\\"docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a\\\",\\\"docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1\\\",\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\"],\\\"sizeBytes\\\":111333938},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\\\",\\\"registry.k8s.io/kube-ap
iserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645\\\",\\\"registry.k8s.io/kube-apiserver:v1.34.1\\\"],\\\"sizeBytes\\\":84753391},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6\\\",\\\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\\\",\\\"registry.k8s.io/kube-proxy:v1.34.1\\\"],\\\"sizeBytes\\\":75938711},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789\\\",\\\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.1\\\"],\\\"sizeBytes\\\":73195387},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f\\\",\\\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce
303e89\\\",\\\"registry.k8s.io/kube-controller-manager:v1.34.1\\\"],\\\"sizeBytes\\\":72629077},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\\\",\\\"registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e\\\",\\\"registry.k8s.io/kube-scheduler:v1.34.1\\\"],\\\"sizeBytes\\\":51592017},{\\\"names\\\":[\\\"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2\\\",\\\"gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944\\\",\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\"],\\\"sizeBytes\\\":29037500},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":51
9884}]}}\" for node \"kubernetes-upgrade-275732\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-275732/status?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 26 09:21:24 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:24.687573     958 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-275732\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-275732?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 26 09:21:24 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:24.687756     958 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-275732\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-275732?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 26 09:21:24 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:24.688031     958 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-275732\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-275732?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 26 09:21:24 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:24.688261     958 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-275732\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-275732?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 26 09:21:24 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:24.688277     958 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Oct 26 09:21:25 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:25.601874     958 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 26 09:21:28 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:28.014867     958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-275732.1871ffa44e3fc5f4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-275732,UID:kubernetes-upgrade-275732,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node kubernetes-upgrade-275732 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-275732,},FirstTimestamp:2025-10-26 09:13:30.431096308 +0000 UTC m=+0.214504193,LastTimestamp:2025-10-26 09:13:30.431096308 +0000 UTC m=+0.214504193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,Reporting
Controller:kubelet,ReportingInstance:kubernetes-upgrade-275732,}"
	Oct 26 09:21:29 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:29.691959     958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-275732?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: I1026 09:21:30.372506     958 scope.go:117] "RemoveContainer" containerID="59a79c4173ba4831a9c6a9aa836904eaac87579fb920d60c25fc84e5f7e0602c"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.372849     958 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-275732_kube-system(e36d7c2f81e3fbdc62c5de8770e15e23)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-275732" podUID="e36d7c2f81e3fbdc62c5de8770e15e23"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: I1026 09:21:30.373466     958 scope.go:117] "RemoveContainer" containerID="68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.373918     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7798a409ac6b60b0e77f15a98491d0f4" pod="kube-system/kube-scheduler-kubernetes-upgrade-275732"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.374315     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6bc6b21d19f34cb3d7ff32e78fb91e8c" pod="kube-system/etcd-kubernetes-upgrade-275732"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.374645     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c4c2ef496cef817a580cdd7032489a6c" pod="kube-system/kube-apiserver-kubernetes-upgrade-275732"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.375013     958 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-275732\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e36d7c2f81e3fbdc62c5de8770e15e23" pod="kube-system/kube-controller-manager-kubernetes-upgrade-275732"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.388389     958 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1\" is already in use by 516f6796d421a1378d7460327c2b4080bbd6eadd6250be3871b203b15c17eba0. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="4adbfdead37deb8ac8c285d043f502520dc8d1657887dfc0ec67d6ddc837eefd"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.388495     958 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-apiserver start failed in pod kube-apiserver-kubernetes-upgrade-275732_kube-system(c4c2ef496cef817a580cdd7032489a6c): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1\" is already in use by 516f6796d421a1378d7460327c2b4080bbd6eadd6250be3871b203b15c17eba0. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.388531     958 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-275732_kube-system_c4c2ef496cef817a580cdd7032489a6c_1\\\" is already in use by 516f6796d421a1378d7460327c2b4080bbd6eadd6250be3871b203b15c17eba0. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-275732" podUID="c4c2ef496cef817a580cdd7032489a6c"
	Oct 26 09:21:30 kubernetes-upgrade-275732 kubelet[958]: E1026 09:21:30.603102     958 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-275732 -n kubernetes-upgrade-275732
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-275732 -n kubernetes-upgrade-275732: exit status 2 (347.626896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-275732" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-275732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-275732
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-275732: (2.148198103s)
--- FAIL: TestKubernetesUpgrade (551.52s)
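The CreateContainerError in the kubelet log also shows why the apiserver never recovers: CRI-O refuses to start a new kube-apiserver container because the name k8s_kube-apiserver_..._1 is still held by container 516f6796d421…. A sketch of the manual cleanup the error message asks for, assuming crictl on the node talks to the CRI-O socket and that this crictl build supports --force (both assumptions, not shown in the log):

	// rm_stale_container.go — hypothetical cleanup sketch.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Container ID copied from the CreateContainerError in the kubelet log.
		const stale = "516f6796d421a1378d7460327c2b4080bbd6eadd6250be3871b203b15c17eba0"
		// --force removes the container even if running (assumed available here).
		out, err := exec.Command("sudo", "crictl", "rm", "--force", stale).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("crictl rm failed:", err)
		}
	}

Once the stale container is gone, the kubelet's next sync can recreate kube-apiserver under the same name.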

                                                
                                    
x
+
TestPause/serial/Pause (6.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-827956 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-827956 --alsologtostderr -v=5: exit status 80 (1.821302041s)

                                                
                                                
-- stdout --
	* Pausing node pause-827956 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 09:16:51.545266  457068 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:16:51.546233  457068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:16:51.546273  457068 out.go:374] Setting ErrFile to fd 2...
	I1026 09:16:51.546295  457068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:16:51.546613  457068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:16:51.546971  457068 out.go:368] Setting JSON to false
	I1026 09:16:51.547038  457068 mustload.go:65] Loading cluster: pause-827956
	I1026 09:16:51.547509  457068 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:51.548078  457068 cli_runner.go:164] Run: docker container inspect pause-827956 --format={{.State.Status}}
	I1026 09:16:51.567056  457068 host.go:66] Checking if "pause-827956" exists ...
	I1026 09:16:51.567510  457068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:16:51.620278  457068 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:16:51.610520849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:16:51.620936  457068 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-827956 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 09:16:51.624404  457068 out.go:179] * Pausing node pause-827956 ... 
	I1026 09:16:51.627491  457068 host.go:66] Checking if "pause-827956" exists ...
	I1026 09:16:51.627866  457068 ssh_runner.go:195] Run: systemctl --version
	I1026 09:16:51.627915  457068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:51.644684  457068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:51.749172  457068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:16:51.762180  457068 pause.go:52] kubelet running: true
	I1026 09:16:51.762246  457068 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:16:51.994056  457068 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:16:51.994228  457068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:16:52.069456  457068 cri.go:89] found id: "abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0"
	I1026 09:16:52.069476  457068 cri.go:89] found id: "d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd"
	I1026 09:16:52.069482  457068 cri.go:89] found id: "ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7"
	I1026 09:16:52.069486  457068 cri.go:89] found id: "e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a"
	I1026 09:16:52.069489  457068 cri.go:89] found id: "ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656"
	I1026 09:16:52.069493  457068 cri.go:89] found id: "6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd"
	I1026 09:16:52.069496  457068 cri.go:89] found id: "12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2"
	I1026 09:16:52.069499  457068 cri.go:89] found id: "5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	I1026 09:16:52.069502  457068 cri.go:89] found id: "17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e"
	I1026 09:16:52.069508  457068 cri.go:89] found id: "e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9"
	I1026 09:16:52.069511  457068 cri.go:89] found id: "89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde"
	I1026 09:16:52.069514  457068 cri.go:89] found id: "4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	I1026 09:16:52.069518  457068 cri.go:89] found id: "37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d"
	I1026 09:16:52.069521  457068 cri.go:89] found id: "6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a"
	I1026 09:16:52.069524  457068 cri.go:89] found id: ""
	I1026 09:16:52.069577  457068 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:16:52.081307  457068 retry.go:31] will retry after 151.868491ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:16:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:16:52.233773  457068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:16:52.247072  457068 pause.go:52] kubelet running: false
	I1026 09:16:52.247164  457068 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:16:52.397767  457068 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:16:52.397856  457068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:16:52.473124  457068 cri.go:89] found id: "abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0"
	I1026 09:16:52.473149  457068 cri.go:89] found id: "d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd"
	I1026 09:16:52.473154  457068 cri.go:89] found id: "ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7"
	I1026 09:16:52.473170  457068 cri.go:89] found id: "e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a"
	I1026 09:16:52.473173  457068 cri.go:89] found id: "ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656"
	I1026 09:16:52.473177  457068 cri.go:89] found id: "6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd"
	I1026 09:16:52.473180  457068 cri.go:89] found id: "12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2"
	I1026 09:16:52.473182  457068 cri.go:89] found id: "5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	I1026 09:16:52.473185  457068 cri.go:89] found id: "17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e"
	I1026 09:16:52.473195  457068 cri.go:89] found id: "e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9"
	I1026 09:16:52.473199  457068 cri.go:89] found id: "89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde"
	I1026 09:16:52.473202  457068 cri.go:89] found id: "4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	I1026 09:16:52.473205  457068 cri.go:89] found id: "37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d"
	I1026 09:16:52.473208  457068 cri.go:89] found id: "6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a"
	I1026 09:16:52.473211  457068 cri.go:89] found id: ""
	I1026 09:16:52.473261  457068 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:16:52.484057  457068 retry.go:31] will retry after 480.411647ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:16:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:16:52.964700  457068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:16:52.979435  457068 pause.go:52] kubelet running: false
	I1026 09:16:52.979483  457068 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:16:53.183694  457068 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:16:53.183779  457068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:16:53.279964  457068 cri.go:89] found id: "abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0"
	I1026 09:16:53.279986  457068 cri.go:89] found id: "d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd"
	I1026 09:16:53.279991  457068 cri.go:89] found id: "ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7"
	I1026 09:16:53.279994  457068 cri.go:89] found id: "e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a"
	I1026 09:16:53.279998  457068 cri.go:89] found id: "ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656"
	I1026 09:16:53.280002  457068 cri.go:89] found id: "6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd"
	I1026 09:16:53.280005  457068 cri.go:89] found id: "12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2"
	I1026 09:16:53.280008  457068 cri.go:89] found id: "5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	I1026 09:16:53.280011  457068 cri.go:89] found id: "17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e"
	I1026 09:16:53.280018  457068 cri.go:89] found id: "e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9"
	I1026 09:16:53.280021  457068 cri.go:89] found id: "89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde"
	I1026 09:16:53.280024  457068 cri.go:89] found id: "4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	I1026 09:16:53.280027  457068 cri.go:89] found id: "37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d"
	I1026 09:16:53.280039  457068 cri.go:89] found id: "6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a"
	I1026 09:16:53.280042  457068 cri.go:89] found id: ""
	I1026 09:16:53.280091  457068 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:16:53.298802  457068 out.go:203] 
	W1026 09:16:53.301169  457068 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:16:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:16:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 09:16:53.301200  457068 out.go:285] * 
	* 
	W1026 09:16:53.308595  457068 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 09:16:53.311688  457068 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-827956 --alsologtostderr -v=5" : exit status 80
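The root cause is visible in the retries above: minikube's pause path lists containers with `sudo runc list -f json`, and every attempt fails with "open /run/runc: no such file or directory". /run/runc is runc's default state root, and on this CRI-O node it simply does not exist; one plausible explanation (an assumption, not confirmed by the log) is that the containers run under a different OCI runtime such as crun, whose state lives at /run/crun, so no runc state is ever written. A minimal diagnostic sketch under those assumptions:

	// runc_state_check.go — hypothetical diagnostic for the failure above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The exact command minikube retried, copied from the log.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list: err=%v\n%s", err, out)

		// runc's default state root; the error above says it is missing.
		if _, statErr := os.Stat("/run/runc"); statErr != nil {
			fmt.Println("/run/runc:", statErr)
			// Worth checking on a CRI-O node: containers may be running under
			// crun instead, whose default state root is /run/crun (assumption).
			if entries, readErr := os.ReadDir("/run/crun"); readErr == nil {
				fmt.Println("/run/crun holds", len(entries), "entries")
			}
		}
	}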
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-827956
helpers_test.go:243: (dbg) docker inspect pause-827956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2",
	        "Created": "2025-10-26T09:15:12.929228265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 452261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:15:12.990173576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/hosts",
	        "LogPath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2-json.log",
	        "Name": "/pause-827956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-827956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-827956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2",
	                "LowerDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-827956",
	                "Source": "/var/lib/docker/volumes/pause-827956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-827956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-827956",
	                "name.minikube.sigs.k8s.io": "pause-827956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27e369101a86bf66b24f7d966d0744e5a6b5ab56d71b22ddbd6f64691224c7a8",
	            "SandboxKey": "/var/run/docker/netns/27e369101a86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-827956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:74:c3:5e:8f:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d6410f3b87c8c4b8e5c46e60c5d5fab80acf7e540d601214a0dae27ba51e762b",
	                    "EndpointID": "7f250ebb4d63599c57b4944879c26f1dfa5b7dc46dddd0f4e0671c1bf7d8019a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-827956",
	                        "e18cb5eb7e8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
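The inspect output above is also where the earlier SSH connection came from: 22/tcp is published on 127.0.0.1:33395, which is exactly what the pause log's cli_runner template extracted at 09:16:51. A sketch of that extraction, reusing the same Go template (minus the extra shell quoting) against the profile's container name:

	// ssh_port.go — hypothetical one-off, reusing minikube's inspect template.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template as the cli_runner.go invocation in the pause log.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-827956").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Against the inspect output shown above, this prints 33395.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}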
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-827956 -n pause-827956
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-827956 -n pause-827956: exit status 2 (428.468547ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-827956 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-827956 logs -n 25: (1.473588459s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:11 UTC │ 26 Oct 25 09:11 UTC │
	│ delete  │ -p NoKubernetes-948910                                                                                                                   │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:11 UTC │ 26 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:11 UTC │ 26 Oct 25 09:12 UTC │
	│ ssh     │ -p NoKubernetes-948910 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │                     │
	│ stop    │ -p NoKubernetes-948910                                                                                                                   │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:12 UTC │
	│ start   │ -p NoKubernetes-948910 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:12 UTC │
	│ start   │ -p missing-upgrade-019301 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-019301    │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:13 UTC │
	│ ssh     │ -p NoKubernetes-948910 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │                     │
	│ delete  │ -p NoKubernetes-948910                                                                                                                   │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:12 UTC │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:13 UTC │
	│ stop    │ -p kubernetes-upgrade-275732                                                                                                             │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ delete  │ -p missing-upgrade-019301                                                                                                                │ missing-upgrade-019301    │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p stopped-upgrade-017998 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-017998    │ jenkins │ v1.32.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │                     │
	│ stop    │ stopped-upgrade-017998 stop                                                                                                              │ stopped-upgrade-017998    │ jenkins │ v1.32.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p stopped-upgrade-017998 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-017998    │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:14 UTC │
	│ delete  │ -p stopped-upgrade-017998                                                                                                                │ stopped-upgrade-017998    │ jenkins │ v1.37.0 │ 26 Oct 25 09:14 UTC │ 26 Oct 25 09:14 UTC │
	│ start   │ -p running-upgrade-931705 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-931705    │ jenkins │ v1.32.0 │ 26 Oct 25 09:14 UTC │ 26 Oct 25 09:14 UTC │
	│ start   │ -p running-upgrade-931705 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-931705    │ jenkins │ v1.37.0 │ 26 Oct 25 09:14 UTC │ 26 Oct 25 09:15 UTC │
	│ delete  │ -p running-upgrade-931705                                                                                                                │ running-upgrade-931705    │ jenkins │ v1.37.0 │ 26 Oct 25 09:15 UTC │ 26 Oct 25 09:15 UTC │
	│ start   │ -p pause-827956 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-827956              │ jenkins │ v1.37.0 │ 26 Oct 25 09:15 UTC │ 26 Oct 25 09:16 UTC │
	│ start   │ -p pause-827956 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-827956              │ jenkins │ v1.37.0 │ 26 Oct 25 09:16 UTC │ 26 Oct 25 09:16 UTC │
	│ pause   │ -p pause-827956 --alsologtostderr -v=5                                                                                                   │ pause-827956              │ jenkins │ v1.37.0 │ 26 Oct 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:16:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:16:27.599366  455005 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:16:27.599540  455005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:16:27.599554  455005 out.go:374] Setting ErrFile to fd 2...
	I1026 09:16:27.599560  455005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:16:27.599829  455005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:16:27.600218  455005 out.go:368] Setting JSON to false
	I1026 09:16:27.601263  455005 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10738,"bootTime":1761459450,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:16:27.601336  455005 start.go:141] virtualization:  
	I1026 09:16:27.604360  455005 out.go:179] * [pause-827956] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:16:27.608259  455005 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:16:27.608378  455005 notify.go:220] Checking for updates...
	I1026 09:16:27.614334  455005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:16:27.617255  455005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:16:27.620111  455005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:16:27.622963  455005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:16:27.626412  455005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:16:27.629867  455005 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:27.630437  455005 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:16:27.655308  455005 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:16:27.655435  455005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:16:27.722983  455005 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:16:27.712899439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:16:27.723095  455005 docker.go:318] overlay module found
	I1026 09:16:27.728153  455005 out.go:179] * Using the docker driver based on existing profile
	I1026 09:16:27.730965  455005 start.go:305] selected driver: docker
	I1026 09:16:27.730985  455005 start.go:925] validating driver "docker" against &{Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:16:27.731120  455005 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:16:27.731224  455005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:16:27.792549  455005 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:16:27.77733343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:16:27.793173  455005 cni.go:84] Creating CNI manager for ""
	I1026 09:16:27.793298  455005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:16:27.793351  455005 start.go:349] cluster config:
	{Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:16:27.798409  455005 out.go:179] * Starting "pause-827956" primary control-plane node in "pause-827956" cluster
	I1026 09:16:27.801385  455005 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:16:27.804440  455005 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:16:27.807366  455005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:16:27.807430  455005 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:16:27.807444  455005 cache.go:58] Caching tarball of preloaded images
	I1026 09:16:27.807457  455005 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:16:27.807532  455005 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:16:27.807543  455005 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:16:27.807684  455005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/config.json ...
	I1026 09:16:27.828246  455005 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:16:27.828271  455005 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:16:27.828284  455005 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:16:27.828306  455005 start.go:360] acquireMachinesLock for pause-827956: {Name:mkdcebf819592c6458943985de21a55c0d7f88a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:16:27.828364  455005 start.go:364] duration metric: took 35.832µs to acquireMachinesLock for "pause-827956"
	I1026 09:16:27.828394  455005 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:16:27.828403  455005 fix.go:54] fixHost starting: 
	I1026 09:16:27.828663  455005 cli_runner.go:164] Run: docker container inspect pause-827956 --format={{.State.Status}}
	I1026 09:16:27.845298  455005 fix.go:112] recreateIfNeeded on pause-827956: state=Running err=<nil>
	W1026 09:16:27.845327  455005 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:16:25.904298  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:26.404146  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:26.904522  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:27.404071  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:27.904841  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:27.904922  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:27.947240  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:27.947264  445201 cri.go:89] found id: ""
	I1026 09:16:27.947272  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:27.947327  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:27.951068  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:27.951138  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:27.979942  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:27.979965  445201 cri.go:89] found id: ""
	I1026 09:16:27.979973  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:27.980024  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:27.984014  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:27.984083  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:28.016316  445201 cri.go:89] found id: ""
	I1026 09:16:28.016345  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.016354  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:28.016360  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:28.016418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:28.073058  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:28.073079  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:28.073084  445201 cri.go:89] found id: ""
	I1026 09:16:28.073091  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:28.073155  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.078209  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.082591  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:28.082666  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:28.127339  445201 cri.go:89] found id: ""
	I1026 09:16:28.127360  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.127369  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:28.127375  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:28.127432  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:28.162170  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:28.162189  445201 cri.go:89] found id: "bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07"
	I1026 09:16:28.162194  445201 cri.go:89] found id: ""
	I1026 09:16:28.162202  445201 logs.go:282] 2 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07]
	I1026 09:16:28.162261  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.166628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.175901  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:28.175972  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:28.204244  445201 cri.go:89] found id: ""
	I1026 09:16:28.204272  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.204281  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:28.204287  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:28.204351  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:28.235132  445201 cri.go:89] found id: ""
	I1026 09:16:28.235153  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.235162  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:28.235171  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:28.235182  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:28.414267  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:28.414344  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:28.447217  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:28.447250  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:28.578852  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:28.578925  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:28.650133  445201 logs.go:123] Gathering logs for kube-controller-manager [bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07] ...
	I1026 09:16:28.650171  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07"
	I1026 09:16:28.679124  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:28.679149  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:28.785166  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:28.785258  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:28.823935  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:28.823966  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:28.910201  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:28.910223  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:28.910236  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:28.953498  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:28.953549  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:28.986007  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:28.986032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:27.848444  455005 out.go:252] * Updating the running docker "pause-827956" container ...
	I1026 09:16:27.848480  455005 machine.go:93] provisionDockerMachine start ...
	I1026 09:16:27.848575  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:27.865944  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:27.866271  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:27.866316  455005 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:16:28.027090  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-827956
	
	I1026 09:16:28.027175  455005 ubuntu.go:182] provisioning hostname "pause-827956"
	I1026 09:16:28.027282  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:28.059041  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:28.059356  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:28.059368  455005 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-827956 && echo "pause-827956" | sudo tee /etc/hostname
	I1026 09:16:28.235612  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-827956
	
	I1026 09:16:28.235700  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:28.262981  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:28.263342  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:28.263374  455005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-827956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-827956/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-827956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:16:28.423172  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:16:28.423196  455005 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:16:28.423233  455005 ubuntu.go:190] setting up certificates
	I1026 09:16:28.423244  455005 provision.go:84] configureAuth start
	I1026 09:16:28.423317  455005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-827956
	I1026 09:16:28.448828  455005 provision.go:143] copyHostCerts
	I1026 09:16:28.448896  455005 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:16:28.448912  455005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:16:28.450333  455005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:16:28.450486  455005 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:16:28.450494  455005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:16:28.450529  455005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:16:28.450589  455005 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:16:28.450594  455005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:16:28.450617  455005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:16:28.450671  455005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.pause-827956 san=[127.0.0.1 192.168.85.2 localhost minikube pause-827956]
	I1026 09:16:28.874222  455005 provision.go:177] copyRemoteCerts
	I1026 09:16:28.874331  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:16:28.874414  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:28.893037  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:29.005495  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:16:29.026154  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 09:16:29.047452  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:16:29.065085  455005 provision.go:87] duration metric: took 641.813536ms to configureAuth
	I1026 09:16:29.065156  455005 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:16:29.065428  455005 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:29.065590  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:29.083509  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:29.083817  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:29.083836  455005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:16:31.528088  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:31.538411  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:31.538477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:31.564156  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:31.564176  445201 cri.go:89] found id: ""
	I1026 09:16:31.564184  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:31.564240  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.567923  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:31.567989  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:31.597656  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:31.597675  445201 cri.go:89] found id: ""
	I1026 09:16:31.597683  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:31.597736  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.601689  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:31.601768  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:31.629019  445201 cri.go:89] found id: ""
	I1026 09:16:31.629044  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.629053  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:31.629060  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:31.629126  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:31.656015  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:31.656036  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:31.656041  445201 cri.go:89] found id: ""
	I1026 09:16:31.656048  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:31.656102  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.659825  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.663471  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:31.663540  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:31.689125  445201 cri.go:89] found id: ""
	I1026 09:16:31.689152  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.689160  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:31.689167  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:31.689294  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:31.718149  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:31.718169  445201 cri.go:89] found id: ""
	I1026 09:16:31.718177  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:31.718228  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.721878  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:31.721959  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:31.749071  445201 cri.go:89] found id: ""
	I1026 09:16:31.749098  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.749108  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:31.749115  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:31.749242  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:31.776036  445201 cri.go:89] found id: ""
	I1026 09:16:31.776102  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.776146  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:31.776181  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:31.776199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:31.807779  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:31.807810  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:31.835831  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:31.835874  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:31.923892  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:31.923944  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:32.064335  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:32.064376  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:32.100733  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:32.100770  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:32.129916  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:32.129948  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:32.146572  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:32.146612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:32.212390  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:32.212413  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:32.212440  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:32.303251  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:32.303290  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:34.854846  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:34.867774  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:34.867849  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:34.895116  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:34.895137  445201 cri.go:89] found id: ""
	I1026 09:16:34.895144  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:34.895202  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:34.899593  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:34.899663  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:34.928083  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:34.928102  445201 cri.go:89] found id: ""
	I1026 09:16:34.928110  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:34.928193  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:34.932132  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:34.932202  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:34.964519  445201 cri.go:89] found id: ""
	I1026 09:16:34.964541  445201 logs.go:282] 0 containers: []
	W1026 09:16:34.964550  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:34.964556  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:34.964614  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:35.003980  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:35.004001  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:35.004006  445201 cri.go:89] found id: ""
	I1026 09:16:35.004015  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:35.004080  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.009750  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.015183  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:35.015256  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:35.057026  445201 cri.go:89] found id: ""
	I1026 09:16:35.057052  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.057061  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:35.057067  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:35.057154  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:35.093208  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:35.093231  445201 cri.go:89] found id: ""
	I1026 09:16:35.093240  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:35.093328  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.098073  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:35.098201  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:35.146763  445201 cri.go:89] found id: ""
	I1026 09:16:35.146836  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.146866  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:35.146887  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:35.146980  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:35.193198  445201 cri.go:89] found id: ""
	I1026 09:16:35.193224  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.193233  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:35.193276  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:35.193295  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:35.209978  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:35.210006  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:35.270161  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:35.270197  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:35.306149  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:35.306226  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:35.404229  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:35.404265  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:34.421750  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:16:34.421777  455005 machine.go:96] duration metric: took 6.573288083s to provisionDockerMachine
	I1026 09:16:34.421789  455005 start.go:293] postStartSetup for "pause-827956" (driver="docker")
	I1026 09:16:34.421808  455005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:16:34.421873  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:16:34.421917  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.448082  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.554549  455005 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:16:34.557864  455005 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:16:34.557894  455005 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:16:34.557905  455005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:16:34.557981  455005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:16:34.558092  455005 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:16:34.558197  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:16:34.565595  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:16:34.584236  455005 start.go:296] duration metric: took 162.431308ms for postStartSetup
	I1026 09:16:34.584382  455005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:16:34.584449  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.601533  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.704560  455005 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:16:34.709547  455005 fix.go:56] duration metric: took 6.881137424s for fixHost
	I1026 09:16:34.709573  455005 start.go:83] releasing machines lock for "pause-827956", held for 6.881196428s
	I1026 09:16:34.709645  455005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-827956
	I1026 09:16:34.727063  455005 ssh_runner.go:195] Run: cat /version.json
	I1026 09:16:34.727113  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.727113  455005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:16:34.727170  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.746193  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.748794  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.846566  455005 ssh_runner.go:195] Run: systemctl --version
	I1026 09:16:34.942243  455005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:16:34.990055  455005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:16:34.995544  455005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:16:34.995638  455005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:16:35.005970  455005 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:16:35.006020  455005 start.go:495] detecting cgroup driver to use...
	I1026 09:16:35.006112  455005 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:16:35.006197  455005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:16:35.025491  455005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:16:35.042168  455005 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:16:35.042284  455005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:16:35.062105  455005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:16:35.079592  455005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:16:35.247261  455005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:16:35.434244  455005 docker.go:234] disabling docker service ...
	I1026 09:16:35.434314  455005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:16:35.462137  455005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:16:35.476462  455005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:16:35.661138  455005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:16:35.851498  455005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:16:35.866184  455005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:16:35.881141  455005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:16:35.881207  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.893379  455005 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:16:35.893441  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.904393  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.915367  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.923900  455005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:16:35.941609  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.952905  455005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.962668  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.974177  455005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:16:35.982575  455005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:16:35.989751  455005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:16:36.124978  455005 ssh_runner.go:195] Run: sudo systemctl restart crio
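The Run lines above perform the whole CRI-O reconfiguration with sed: pin the pause image and the cgroup manager in the 02-crio.conf drop-in, open unprivileged low ports, then daemon-reload and restart. A minimal Go sketch of the two substitutions, assuming the same drop-in path and that the file is edited locally rather than over SSH as minikube does:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Mirror the two sed substitutions: pin the pause image and the cgroup manager.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}

As in the log, nothing takes effect until the subsequent systemctl daemon-reload and systemctl restart crio.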
	I1026 09:16:36.549120  455005 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:16:36.549195  455005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:16:36.553041  455005 start.go:563] Will wait 60s for crictl version
	I1026 09:16:36.553131  455005 ssh_runner.go:195] Run: which crictl
	I1026 09:16:36.556668  455005 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:16:36.582568  455005 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
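The crictl version report is flat "Key:  value" text, one pair per line. A hypothetical parser sketch (not the one crictl or minikube ship) that collects the pairs:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Sample output copied from the log above.
	const out = "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.34.1\nRuntimeApiVersion:  v1\n"

	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"]) // cri-o 1.34.1
}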
	I1026 09:16:36.582680  455005 ssh_runner.go:195] Run: crio --version
	I1026 09:16:36.616288  455005 ssh_runner.go:195] Run: crio --version
	I1026 09:16:36.693056  455005 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:16:36.696051  455005 cli_runner.go:164] Run: docker network inspect pause-827956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:16:36.720275  455005 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:16:36.725128  455005 kubeadm.go:883] updating cluster {Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:16:36.725261  455005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:16:36.725312  455005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:16:36.817079  455005 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:16:36.817100  455005 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:16:36.817153  455005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:16:36.883122  455005 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:16:36.883142  455005 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:16:36.883150  455005 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:16:36.883636  455005 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-827956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:16:36.883737  455005 ssh_runner.go:195] Run: crio config
	I1026 09:16:37.019180  455005 cni.go:84] Creating CNI manager for ""
	I1026 09:16:37.019252  455005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:16:37.019287  455005 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:16:37.019345  455005 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-827956 NodeName:pause-827956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:16:37.019532  455005 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-827956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:16:37.019645  455005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:16:37.044090  455005 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:16:37.044218  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:16:37.055630  455005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1026 09:16:37.080014  455005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:16:37.098631  455005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
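The kubeadm.yaml just shipped to the node embeds the KubeletConfiguration, which has to agree with the "cgroupfs" driver detected earlier and with the CRI-O socket. A minimal cross-check sketch, assuming the third-party gopkg.in/yaml.v3 package and unmarshalling only the fields of interest:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the fields we want to verify.
type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

// Fragment of the KubeletConfiguration document generated above.
const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		log.Fatal(err)
	}
	if kc.CgroupDriver != "cgroupfs" {
		log.Fatalf("kubelet cgroup driver %q does not match host driver", kc.CgroupDriver)
	}
	fmt.Println("kubelet config OK:", kc.Kind, kc.ContainerRuntimeEndpoint)
}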
	I1026 09:16:37.124359  455005 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:16:37.131103  455005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:16:37.389640  455005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:16:37.405879  455005 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956 for IP: 192.168.85.2
	I1026 09:16:37.405953  455005 certs.go:195] generating shared ca certs ...
	I1026 09:16:37.405985  455005 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:16:37.406183  455005 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:16:37.406256  455005 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:16:37.406291  455005 certs.go:257] generating profile certs ...
	I1026 09:16:37.406419  455005 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key
	I1026 09:16:37.406529  455005 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/apiserver.key.ba406644
	I1026 09:16:37.406649  455005 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/proxy-client.key
	I1026 09:16:37.406833  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:16:37.406891  455005 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:16:37.406916  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:16:37.406976  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:16:37.407024  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:16:37.407081  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:16:37.407154  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:16:37.407804  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:16:37.448838  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:16:37.479815  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:16:37.508898  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:16:37.541412  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 09:16:37.578685  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:16:37.613557  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:16:37.636814  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:16:37.660729  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:16:37.678341  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:16:37.696053  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:16:37.720352  455005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:16:37.741552  455005 ssh_runner.go:195] Run: openssl version
	I1026 09:16:37.763132  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:16:37.779261  455005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:16:37.786015  455005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:16:37.786086  455005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:16:37.858955  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:16:37.867699  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:16:37.876635  455005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:16:37.880917  455005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:16:37.880982  455005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:16:37.924984  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:16:37.933896  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:16:37.943657  455005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:16:37.951008  455005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:16:37.951084  455005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:16:38.001805  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
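Each CA certificate is installed twice: under its own name in /usr/share/ca-certificates, and as a <subject-hash>.0 symlink in /etc/ssl/certs (e.g. b5213941.0 above), which is the layout OpenSSL's certificate lookup expects. A sketch of the same linking step, shelling out to openssl for the hash; the input path is a placeholder:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"

	// Equivalent to `ln -fs`: drop any stale link, then relink.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	log.Println("linked", link, "->", pem)
}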
	I1026 09:16:38.013566  455005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:16:38.018523  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:16:38.065363  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:16:38.110952  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:16:38.153764  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:16:38.196997  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:16:38.246379  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
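openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours, which is how each control-plane cert above is screened before reuse. The equivalent check in plain crypto/x509, sketched against a placeholder path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if NotAfter falls within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires at %s (within 24h)", cert.NotAfter)
	}
	log.Println("certificate valid until", cert.NotAfter)
}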
	I1026 09:16:38.294647  455005 kubeadm.go:400] StartCluster: {Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:16:38.294787  455005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:16:38.294858  455005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:16:38.334769  455005 cri.go:89] found id: "abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0"
	I1026 09:16:38.334813  455005 cri.go:89] found id: "d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd"
	I1026 09:16:38.334820  455005 cri.go:89] found id: "ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7"
	I1026 09:16:38.334824  455005 cri.go:89] found id: "e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a"
	I1026 09:16:38.334827  455005 cri.go:89] found id: "ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656"
	I1026 09:16:38.334831  455005 cri.go:89] found id: "6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd"
	I1026 09:16:38.334836  455005 cri.go:89] found id: "12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2"
	I1026 09:16:38.334839  455005 cri.go:89] found id: "5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	I1026 09:16:38.334842  455005 cri.go:89] found id: "17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e"
	I1026 09:16:38.334851  455005 cri.go:89] found id: "e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9"
	I1026 09:16:38.334866  455005 cri.go:89] found id: "89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde"
	I1026 09:16:38.334870  455005 cri.go:89] found id: "4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	I1026 09:16:38.334873  455005 cri.go:89] found id: "37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d"
	I1026 09:16:38.334878  455005 cri.go:89] found id: "6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a"
	I1026 09:16:38.334888  455005 cri.go:89] found id: ""
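The IDs above come from crictl ps -a --quiet with a pod-namespace label filter, which prints one container ID per line (the final empty found id entry terminates the list in these logs). A sketch of running the same query and collecting the IDs, assuming crictl on PATH and root privileges:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out)) // one 64-char hex ID per line
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}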
	I1026 09:16:38.334970  455005 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:16:38.348923  455005 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:16:38Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:16:38.349012  455005 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:16:38.358687  455005 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:16:38.358757  455005 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:16:38.358809  455005 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:16:38.367239  455005 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:16:38.368018  455005 kubeconfig.go:125] found "pause-827956" server: "https://192.168.85.2:8443"
	I1026 09:16:38.369012  455005 kapi.go:59] client config for pause-827956: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
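The rest.Config dumped above is assembled from the profile's client certificate and key plus the cluster CA. A minimal client-go sketch that builds the same kind of config and lists nodes; the relative cert paths here are abbreviations of the ones in the log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".minikube/profiles/pause-827956/client.crt", // abbreviated paths
			KeyFile:  ".minikube/profiles/pause-827956/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}

Mutual TLS against the apiserver needs nothing beyond these three files, which is why the dump shows empty bearer-token and basic-auth fields.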
	I1026 09:16:38.369675  455005 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 09:16:38.369695  455005 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 09:16:38.369701  455005 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 09:16:38.369746  455005 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 09:16:38.369752  455005 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 09:16:38.370139  455005 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:16:38.386286  455005 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 09:16:38.386323  455005 kubeadm.go:601] duration metric: took 27.557917ms to restartPrimaryControlPlane
	I1026 09:16:38.386332  455005 kubeadm.go:402] duration metric: took 91.69617ms to StartCluster
	I1026 09:16:38.386357  455005 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:16:38.386431  455005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:16:38.387467  455005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:16:38.387735  455005 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:16:38.388066  455005 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:16:38.388432  455005 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:38.393664  455005 out.go:179] * Enabled addons: 
	I1026 09:16:38.393728  455005 out.go:179] * Verifying Kubernetes components...
	I1026 09:16:35.461433  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:35.461514  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:35.632313  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:35.632388  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 09:16:35.704335  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:16:35.710695  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:35.710727  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:35.710741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	W1026 09:16:35.829972  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:35.830049  445201 retry.go:31] will retry after 24.298648909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:35.900420  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:35.900498  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:35.943003  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:35.943071  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:38.478842  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:38.499470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:38.499549  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:38.561966  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:38.561991  445201 cri.go:89] found id: ""
	I1026 09:16:38.562000  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:38.562054  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.565941  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:38.566019  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:38.618045  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:38.618070  445201 cri.go:89] found id: ""
	I1026 09:16:38.618078  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:38.618133  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.622308  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:38.622382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:38.685130  445201 cri.go:89] found id: ""
	I1026 09:16:38.685158  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.685167  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:38.685173  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:38.685237  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:38.741160  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:38.741185  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:38.741190  445201 cri.go:89] found id: ""
	I1026 09:16:38.741197  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:38.741253  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.745509  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.749859  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:38.749939  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:38.787906  445201 cri.go:89] found id: ""
	I1026 09:16:38.787934  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.787943  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:38.787949  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:38.788007  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:38.843118  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:38.843144  445201 cri.go:89] found id: ""
	I1026 09:16:38.843153  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:38.843209  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.851429  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:38.851513  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:38.911991  445201 cri.go:89] found id: ""
	I1026 09:16:38.912018  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.912027  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:38.912033  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:38.912093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:38.964607  445201 cri.go:89] found id: ""
	I1026 09:16:38.964634  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.964643  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:38.964657  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:38.964668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:39.099227  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:39.099252  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:39.099266  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:39.257625  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:39.257710  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:39.323941  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:39.323981  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:39.400574  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:39.400612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:39.477668  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:39.477707  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:39.647471  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:39.647511  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:39.687069  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:39.687105  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:39.730533  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:39.730615  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:39.781743  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:39.781825  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:38.396509  455005 addons.go:514] duration metric: took 8.43828ms for enable addons: enabled=[]
	I1026 09:16:38.396608  455005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:16:38.753467  455005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:16:38.777771  455005 node_ready.go:35] waiting up to 6m0s for node "pause-827956" to be "Ready" ...
	I1026 09:16:42.130375  455005 node_ready.go:49] node "pause-827956" is "Ready"
	I1026 09:16:42.130411  455005 node_ready.go:38] duration metric: took 3.352604487s for node "pause-827956" to be "Ready" ...
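"Ready" here means the node's NodeReady condition has status True. A minimal client-go sketch of the same wait loop, assuming a kubeconfig at a placeholder path:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder kubeconfig
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "pause-827956", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node Ready")
}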
	I1026 09:16:42.130426  455005 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:16:42.130495  455005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:42.149355  455005 api_server.go:72] duration metric: took 3.761582632s to wait for apiserver process to appear ...
	I1026 09:16:42.149385  455005 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:16:42.149406  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:42.218608  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 09:16:42.218642  455005 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
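A 500 from /healthz is expected while the apiserver's post-start hooks settle; each still-failing hook shows up as a [-] line, and the set shrinks between polls (compare the two responses above). A polling sketch that counts the failing checks; TLS verification is skipped purely to keep the example short, whereas the real client pins the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz?verbose")
		if err != nil {
			log.Println("healthz:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		failing := 0
		for _, line := range strings.Split(string(body), "\n") {
			if strings.HasPrefix(line, "[-]") {
				failing++
			}
		}
		fmt.Printf("healthz %d: %d checks still failing\n", resp.StatusCode, failing)
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for healthz")
}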
	I1026 09:16:42.397606  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:42.417937  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:42.418016  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:42.495359  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:42.495378  445201 cri.go:89] found id: ""
	I1026 09:16:42.495386  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:42.495440  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.501607  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:42.501703  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:42.551870  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:42.551932  445201 cri.go:89] found id: ""
	I1026 09:16:42.551954  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:42.552046  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.556449  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:42.556566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:42.586577  445201 cri.go:89] found id: ""
	I1026 09:16:42.586646  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.586678  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:42.586703  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:42.586820  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:42.617869  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:42.617892  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:42.617897  445201 cri.go:89] found id: ""
	I1026 09:16:42.617915  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:42.617970  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.625871  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.630165  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:42.630242  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:42.666758  445201 cri.go:89] found id: ""
	I1026 09:16:42.666834  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.666858  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:42.666880  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:42.666965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:42.701859  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:42.701926  445201 cri.go:89] found id: ""
	I1026 09:16:42.701948  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:42.702031  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.706073  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:42.706198  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:42.734210  445201 cri.go:89] found id: ""
	I1026 09:16:42.734238  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.734247  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:42.734253  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:42.734316  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:42.761520  445201 cri.go:89] found id: ""
	I1026 09:16:42.761543  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.761561  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:42.761577  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:42.761588  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:42.852029  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:42.852074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:42.900883  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:42.900926  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:42.919113  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:42.919143  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:43.013913  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:43.013936  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:43.013948  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:43.112560  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:43.112603  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:43.165342  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:43.165375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:43.197760  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:43.197792  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:43.240362  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:43.240393  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:43.395807  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:43.395853  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:42.650177  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:42.660274  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 09:16:42.660302  455005 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 09:16:43.149472  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:43.163458  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 09:16:43.163487  455005 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 09:16:43.650188  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:43.658431  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 09:16:43.659487  455005 api_server.go:141] control plane version: v1.34.1
	I1026 09:16:43.659523  455005 api_server.go:131] duration metric: took 1.510131165s to wait for apiserver health ...
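The wait above is a plain poll-until-200 loop against the apiserver's /healthz endpoint: 500 responses whose bodies enumerate each postStartHook as [+] ok or [-] failed count as "not ready yet". A minimal Go sketch of that pattern (names invented; not minikube's api_server.go, and InsecureSkipVerify merely stands in for the cluster-CA trust a real client would configure):

// healthz_poll.go — a minimal sketch of the poll visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Assumption: skipping TLS verification keeps the sketch short; a real
	// client would load the cluster CA (ca.crt) instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// A 500 body lists hooks such as
			// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}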
	I1026 09:16:43.659532  455005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:16:43.663384  455005 system_pods.go:59] 7 kube-system pods found
	I1026 09:16:43.663426  455005 system_pods.go:61] "coredns-66bc5c9577-55zjj" [7fc468bf-1986-4172-8eba-98945beb861a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:16:43.663436  455005 system_pods.go:61] "etcd-pause-827956" [03cc5c30-6b85-4b59-ba1d-75d4036c44a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:16:43.663443  455005 system_pods.go:61] "kindnet-xws2g" [07c1368d-3ddb-492b-90ea-8c001e45fbe5] Running
	I1026 09:16:43.663451  455005 system_pods.go:61] "kube-apiserver-pause-827956" [33092d00-f827-4599-9a46-880483bf6300] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:16:43.663499  455005 system_pods.go:61] "kube-controller-manager-pause-827956" [bc582be5-af80-46ec-a948-b5b66b211dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:16:43.663515  455005 system_pods.go:61] "kube-proxy-256pg" [0f084e0d-f221-4e24-ab7a-5ae1cb414b56] Running
	I1026 09:16:43.663522  455005 system_pods.go:61] "kube-scheduler-pause-827956" [b3e4894b-19a5-4506-9cfe-c7ae807e139e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:16:43.663529  455005 system_pods.go:74] duration metric: took 3.990497ms to wait for pod list to return data ...
	I1026 09:16:43.663543  455005 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:16:43.666244  455005 default_sa.go:45] found service account: "default"
	I1026 09:16:43.666295  455005 default_sa.go:55] duration metric: took 2.7456ms for default service account to be created ...
	I1026 09:16:43.666307  455005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:16:43.669581  455005 system_pods.go:86] 7 kube-system pods found
	I1026 09:16:43.669619  455005 system_pods.go:89] "coredns-66bc5c9577-55zjj" [7fc468bf-1986-4172-8eba-98945beb861a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:16:43.669633  455005 system_pods.go:89] "etcd-pause-827956" [03cc5c30-6b85-4b59-ba1d-75d4036c44a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:16:43.669643  455005 system_pods.go:89] "kindnet-xws2g" [07c1368d-3ddb-492b-90ea-8c001e45fbe5] Running
	I1026 09:16:43.669654  455005 system_pods.go:89] "kube-apiserver-pause-827956" [33092d00-f827-4599-9a46-880483bf6300] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:16:43.669667  455005 system_pods.go:89] "kube-controller-manager-pause-827956" [bc582be5-af80-46ec-a948-b5b66b211dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:16:43.669677  455005 system_pods.go:89] "kube-proxy-256pg" [0f084e0d-f221-4e24-ab7a-5ae1cb414b56] Running
	I1026 09:16:43.669684  455005 system_pods.go:89] "kube-scheduler-pause-827956" [b3e4894b-19a5-4506-9cfe-c7ae807e139e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:16:43.669692  455005 system_pods.go:126] duration metric: took 3.378789ms to wait for k8s-apps to be running ...
	I1026 09:16:43.669708  455005 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:16:43.669786  455005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:16:43.683216  455005 system_svc.go:56] duration metric: took 13.497458ms WaitForService to wait for kubelet
	I1026 09:16:43.683258  455005 kubeadm.go:586] duration metric: took 5.295490772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:16:43.683278  455005 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:16:43.686311  455005 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:16:43.686339  455005 node_conditions.go:123] node cpu capacity is 2
	I1026 09:16:43.686352  455005 node_conditions.go:105] duration metric: took 3.067738ms to run NodePressure ...
	I1026 09:16:43.686364  455005 start.go:241] waiting for startup goroutines ...
	I1026 09:16:43.686371  455005 start.go:246] waiting for cluster config update ...
	I1026 09:16:43.686380  455005 start.go:255] writing updated cluster config ...
	I1026 09:16:43.686756  455005 ssh_runner.go:195] Run: rm -f paused
	I1026 09:16:43.690193  455005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:16:43.690989  455005 kapi.go:59] client config for pause-827956: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 09:16:43.693929  455005 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-55zjj" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:16:45.702181  455005 pod_ready.go:104] pod "coredns-66bc5c9577-55zjj" is not "Ready", error: <nil>
	I1026 09:16:45.947614  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:45.958234  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:45.958307  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:45.983853  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:45.983876  445201 cri.go:89] found id: ""
	I1026 09:16:45.983884  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:45.983938  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:45.987866  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:45.987940  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:46.014318  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:46.014341  445201 cri.go:89] found id: ""
	I1026 09:16:46.014350  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:46.014411  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.018353  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:46.018426  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:46.048052  445201 cri.go:89] found id: ""
	I1026 09:16:46.048078  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.048086  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:46.048093  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:46.048200  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:46.076123  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:46.076197  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:46.076208  445201 cri.go:89] found id: ""
	I1026 09:16:46.076223  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:46.076283  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.080561  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.084247  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:46.084318  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:46.115050  445201 cri.go:89] found id: ""
	I1026 09:16:46.115076  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.115085  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:46.115104  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:46.115163  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:46.142084  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:46.142107  445201 cri.go:89] found id: ""
	I1026 09:16:46.142115  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:46.142194  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.146126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:46.146204  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:46.173893  445201 cri.go:89] found id: ""
	I1026 09:16:46.173928  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.173939  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:46.173945  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:46.174009  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:46.207007  445201 cri.go:89] found id: ""
	I1026 09:16:46.207074  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.207098  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:46.207121  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:46.207146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:46.281196  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:46.281257  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:46.281286  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:46.365641  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:46.365679  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:46.395828  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:46.395859  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:46.429785  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:46.429825  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:46.450173  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:46.450204  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:46.493277  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:46.493310  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:46.552581  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:46.552619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:46.588862  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:46.588892  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:46.679688  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:46.679719  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
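Each log-gathering cycle above is two steps per component: discover container IDs with crictl, then tail each container's logs. A standalone Go sketch of that loop, using the exact crictl invocations shown in the ssh_runner lines (the helper names are invented):

// cri_logs_sketch.go — replays the "listing CRI containers / gathering logs"
// cycle from the log above against a local CRI runtime.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

// tailLogs mirrors: sudo crictl logs --tail 400 <id>
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		logs, _ := tailLogs(ids[0], 400)
		fmt.Printf("== %s [%s] ==\n%s\n", name, ids[0], logs)
	}
}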
	I1026 09:16:49.344869  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:49.356555  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:49.356623  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:49.389547  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:49.389571  445201 cri.go:89] found id: ""
	I1026 09:16:49.389580  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:49.389639  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.393152  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:49.393236  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:49.419217  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:49.419240  445201 cri.go:89] found id: ""
	I1026 09:16:49.419249  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:49.419320  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.423239  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:49.423309  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:49.453234  445201 cri.go:89] found id: ""
	I1026 09:16:49.453257  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.453266  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:49.453272  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:49.453335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:49.483822  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:49.483846  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:49.483851  445201 cri.go:89] found id: ""
	I1026 09:16:49.483859  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:49.483912  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.487530  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.491109  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:49.491181  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:49.520353  445201 cri.go:89] found id: ""
	I1026 09:16:49.520374  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.520383  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:49.520389  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:49.520448  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:49.546473  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:49.546502  445201 cri.go:89] found id: ""
	I1026 09:16:49.546510  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:49.546569  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.550263  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:49.550336  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:49.583559  445201 cri.go:89] found id: ""
	I1026 09:16:49.583588  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.583597  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:49.583604  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:49.583661  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:49.608811  445201 cri.go:89] found id: ""
	I1026 09:16:49.608834  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.608842  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:49.608856  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:49.608867  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:49.641306  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:49.641330  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:49.786127  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:49.786165  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:49.892036  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:49.892055  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:49.892068  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:49.995466  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:49.995550  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:50.055401  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:50.055506  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:50.086674  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:50.086794  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:50.174213  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:50.174252  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:50.195967  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:50.196051  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:50.250465  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:50.250500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	W1026 09:16:48.199984  455005 pod_ready.go:104] pod "coredns-66bc5c9577-55zjj" is not "Ready", error: <nil>
	I1026 09:16:49.199566  455005 pod_ready.go:94] pod "coredns-66bc5c9577-55zjj" is "Ready"
	I1026 09:16:49.199597  455005 pod_ready.go:86] duration metric: took 5.50564435s for pod "coredns-66bc5c9577-55zjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:49.202343  455005 pod_ready.go:83] waiting for pod "etcd-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:49.207635  455005 pod_ready.go:94] pod "etcd-pause-827956" is "Ready"
	I1026 09:16:49.207670  455005 pod_ready.go:86] duration metric: took 5.303916ms for pod "etcd-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:49.210161  455005 pod_ready.go:83] waiting for pod "kube-apiserver-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.218330  455005 pod_ready.go:94] pod "kube-apiserver-pause-827956" is "Ready"
	I1026 09:16:50.218353  455005 pod_ready.go:86] duration metric: took 1.008168996s for pod "kube-apiserver-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.223635  455005 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.229706  455005 pod_ready.go:94] pod "kube-controller-manager-pause-827956" is "Ready"
	I1026 09:16:50.229730  455005 pod_ready.go:86] duration metric: took 6.072492ms for pod "kube-controller-manager-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.398044  455005 pod_ready.go:83] waiting for pod "kube-proxy-256pg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.797043  455005 pod_ready.go:94] pod "kube-proxy-256pg" is "Ready"
	I1026 09:16:50.797067  455005 pod_ready.go:86] duration metric: took 398.955902ms for pod "kube-proxy-256pg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.998254  455005 pod_ready.go:83] waiting for pod "kube-scheduler-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:51.397488  455005 pod_ready.go:94] pod "kube-scheduler-pause-827956" is "Ready"
	I1026 09:16:51.397519  455005 pod_ready.go:86] duration metric: took 399.194921ms for pod "kube-scheduler-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:51.397532  455005 pod_ready.go:40] duration metric: took 7.707306268s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:16:51.454178  455005 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:16:51.457385  455005 out.go:179] * Done! kubectl is now configured to use "pause-827956" cluster and "default" namespace by default
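The pod_ready.go lines above wait, per pod, for the Ready condition "or be gone". A minimal client-go sketch of the same check (hypothetical names; the kubeconfig path and the 500 ms poll interval are assumptions, not minikube's actual values):

// pod_ready_sketch.go — block until a pod's Ready condition is True or the
// pod no longer exists, the pattern logged above for the kube-system pods.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // the pod is gone, which also ends the wait
		}
		if err == nil && isReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-55zjj", 4*time.Minute))
}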
	
	
	==> CRI-O <==
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.974151099Z" level=info msg="Created container d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd: kube-system/coredns-66bc5c9577-55zjj/coredns" id=608f73e5-09ec-4433-a963-36482eee98a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.974578296Z" level=info msg="Started container" PID=2158 containerID=e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a description=kube-system/etcd-pause-827956/etcd id=af90b639-5e3c-4f1e-a8f4-f8f7d139b7ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=c15c154dd600aa6bf61c99a50e62a2c496889bac5085f44cb7ad21a73241a191
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.975461482Z" level=info msg="Starting container: d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd" id=1b0cb93c-7460-411d-aa4c-44b51390a258 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.99680911Z" level=info msg="Started container" PID=2170 containerID=d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd description=kube-system/coredns-66bc5c9577-55zjj/coredns id=1b0cb93c-7460-411d-aa4c-44b51390a258 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dee9282bb453234d3c9f4e1ca5ac5c2c4ab90a09b504a15cedb7434704573712
	Oct 26 09:16:37 pause-827956 crio[2049]: time="2025-10-26T09:16:37.014944842Z" level=info msg="Created container abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0: kube-system/kube-apiserver-pause-827956/kube-apiserver" id=1ca8c862-8b35-4c20-b1a2-316189ab2fbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:16:37 pause-827956 crio[2049]: time="2025-10-26T09:16:37.015848451Z" level=info msg="Starting container: abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0" id=0c93bd4f-8fad-4720-9585-acc3b217a3a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:16:37 pause-827956 crio[2049]: time="2025-10-26T09:16:37.019324915Z" level=info msg="Started container" PID=2197 containerID=abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0 description=kube-system/kube-apiserver-pause-827956/kube-apiserver id=0c93bd4f-8fad-4720-9585-acc3b217a3a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=738bcc2f62101bb50a4162e9fe8def6cedba8bebb86189ec2ff7d8b770f83192
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.233721479Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.23740931Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.237446012Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.237471637Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.241029743Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.241066133Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.241090995Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.244192193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.24422686Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.244249991Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.247424158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.247469746Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.247494575Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.250853401Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.250888618Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.250911929Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.254011593Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.254046244Z" level=info msg="Updated default CNI network name to kindnet"
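The CREATE → WRITE → RENAME sequence above is kindnet rewriting its conflist atomically through a .temp file, with CRI-O's CNI monitor re-reading the default network on every event. A minimal sketch that would observe the same event stream, using github.com/fsnotify/fsnotify (the filesystem-notification library that CRI-O's CNI monitoring, via ocicni, builds on; the watch path matches the log):

// cni_watch_sketch.go — watch /etc/cni/net.d and print events in the same
// shape as the "CNI monitoring event" lines above.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		// CREATE of *.temp, several WRITEs, then a RENAME that lands the final
		// 10-kindnet.conflist is the atomic-update pattern in the CRI-O log.
		log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
	}
}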
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	abeec7c4eb27a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago       Running             kube-apiserver            1                   738bcc2f62101       kube-apiserver-pause-827956            kube-system
	d76caf76791ff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   17 seconds ago       Running             coredns                   1                   dee9282bb4532       coredns-66bc5c9577-55zjj               kube-system
	ab4c4b82eb630       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   17 seconds ago       Running             kube-proxy                1                   9e725f779785e       kube-proxy-256pg                       kube-system
	e83d0bbe6c148       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago       Running             etcd                      1                   c15c154dd600a       etcd-pause-827956                      kube-system
	ce03a3f9e33c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   17 seconds ago       Running             kindnet-cni               1                   f22039d015ac6       kindnet-xws2g                          kube-system
	6662ea14a334a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago       Running             kube-controller-manager   1                   2071a9e92c010       kube-controller-manager-pause-827956   kube-system
	12bfc7b7ec9ef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago       Running             kube-scheduler            1                   f870281ad79b0       kube-scheduler-pause-827956            kube-system
	5a42e583dd72c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   28 seconds ago       Exited              coredns                   0                   dee9282bb4532       coredns-66bc5c9577-55zjj               kube-system
	17d2f4193961a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f22039d015ac6       kindnet-xws2g                          kube-system
	e2303b10c782a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   9e725f779785e       kube-proxy-256pg                       kube-system
	89a45b6148f73       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   2071a9e92c010       kube-controller-manager-pause-827956   kube-system
	4fe133254cda6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   738bcc2f62101       kube-apiserver-pause-827956            kube-system
	37f3a1c3ea560       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   c15c154dd600a       etcd-pause-827956                      kube-system
	6ae98efebf5b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   f870281ad79b0       kube-scheduler-pause-827956            kube-system
	
	
	==> coredns [5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49014 - 57664 "HINFO IN 7018778514784035254.560845710521869690. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022597217s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48721 - 49217 "HINFO IN 6978744325466965760.3353292712821291265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030323555s
	
	
	==> describe nodes <==
	Name:               pause-827956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-827956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=pause-827956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_15_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-827956
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:16:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-827956
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ebc43747-cbfd-4a43-9763-85deb5eb87af
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-55zjj                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     71s
	  kube-system                 etcd-pause-827956                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kindnet-xws2g                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      71s
	  kube-system                 kube-apiserver-pause-827956             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-827956    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-256pg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-pause-827956             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 69s   kube-proxy       
	  Normal   Starting                 12s   kube-proxy       
	  Normal   Starting                 76s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 76s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  76s   kubelet          Node pause-827956 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s   kubelet          Node pause-827956 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s   kubelet          Node pause-827956 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           72s   node-controller  Node pause-827956 event: Registered Node pause-827956 in Controller
	  Normal   NodeReady                29s   kubelet          Node pause-827956 status is now: NodeReady
	  Normal   RegisteredNode           9s    node-controller  Node pause-827956 event: Registered Node pause-827956 in Controller
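As a quick cross-check (not part of the report), the percentage columns above follow from the Capacity block (2 CPU, 8022300 Ki memory): the per-pod CPU requests sum to the Allocated row, and kubectl truncates to whole percents:

\[ \frac{100+100+100+250+200+0+100\ \text{m}}{2000\ \text{m}} = \frac{850\ \text{m}}{2000\ \text{m}} = 42.5\%\ (\text{shown as } 42\%), \qquad \frac{220 \cdot 1024\ \text{Ki}}{8022300\ \text{Ki}} \approx 2.8\%\ (\text{shown as } 2\%) \]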
	
	
	==> dmesg <==
	[Oct26 08:45] overlayfs: idmapped layers are currently not supported
	[Oct26 08:50] overlayfs: idmapped layers are currently not supported
	[  +3.466267] overlayfs: idmapped layers are currently not supported
	[Oct26 08:51] overlayfs: idmapped layers are currently not supported
	[Oct26 08:52] overlayfs: idmapped layers are currently not supported
	[ +49.561224] hrtimer: interrupt took 37499666 ns
	[Oct26 08:53] overlayfs: idmapped layers are currently not supported
	[Oct26 08:58] overlayfs: idmapped layers are currently not supported
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d] <==
	{"level":"warn","ts":"2025-10-26T09:15:33.621243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.663197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.731999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.749874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.797850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.855510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:34.099464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T09:16:29.257144Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T09:16:29.257203Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-827956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-26T09:16:29.257291Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T09:16:29.527692Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T09:16:29.529196Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T09:16:29.529249Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-26T09:16:29.529317Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T09:16:29.529335Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529366Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529445Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T09:16:29.529489Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529578Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529596Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T09:16:29.529603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T09:16:29.532631Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-26T09:16:29.532714Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T09:16:29.532757Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T09:16:29.532782Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-827956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a] <==
	{"level":"warn","ts":"2025-10-26T09:16:40.887992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.907646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.928693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.946320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.958199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.977532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.999244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.011525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.027477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.046924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.062054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.077248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.093579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.106987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.127790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.136867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.155156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.168523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.184249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.202364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.221147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.259263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.278320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.293409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.347165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:16:54 up  2:59,  0 user,  load average: 3.54, 3.53, 2.67
	Linux pause-827956 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e] <==
	I1026 09:15:44.512222       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:15:44.598986       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:15:44.599190       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:15:44.599231       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:15:44.599268       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:15:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:15:44.799793       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:15:44.799869       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:15:44.799902       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:15:44.800870       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:16:14.800023       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:16:14.801224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:16:14.801233       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:16:14.801327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 09:16:16.100966       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:16:16.100997       1 metrics.go:72] Registering metrics
	I1026 09:16:16.101049       1 controller.go:711] "Syncing nftables rules"
	I1026 09:16:24.801664       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:16:24.801819       1 main.go:301] handling current node
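	
	The "Waiting for caches to sync" / "Caches are synced" pairs in this and the following blocks are client-go's standard shared-informer startup handshake; the reflector errors at 09:16:14 show the list calls that keep it blocked while the apiserver is down. A minimal sketch of the pattern, assuming an in-cluster config; the names are illustrative, not kindnet's actual code:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()
	
		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())
	
		// The "Waiting for caches to sync" phase above: this blocks until
		// the initial list/watch against the apiserver succeeds.
		if !cache.WaitForCacheSync(ctx.Done(), nodes.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("caches are synced")
	}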
	
	
	==> kindnet [ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656] <==
	I1026 09:16:37.032212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:16:37.032527       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:16:37.032649       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:16:37.032661       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:16:37.032675       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:16:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:16:37.239275       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:16:37.239363       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:16:37.239398       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:16:37.239874       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 09:16:42.241880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:16:42.241918       1 metrics.go:72] Registering metrics
	I1026 09:16:42.242006       1 controller.go:711] "Syncing nftables rules"
	I1026 09:16:47.233313       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:16:47.233397       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217] <==
	W1026 09:16:29.276559       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276570       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276603       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276635       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276650       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276697       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276722       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276740       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276783       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276821       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276827       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276868       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276873       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276921       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276933       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276966       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276990       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276695       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276784       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277039       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277070       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276440       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277112       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277140       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0] <==
	I1026 09:16:42.187460       1 policy_source.go:240] refreshing policies
	I1026 09:16:42.189987       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:16:42.190029       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:16:42.190149       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:16:42.217159       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:16:42.219237       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:16:42.219379       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:16:42.232659       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 09:16:42.233950       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:16:42.239081       1 aggregator.go:171] initial CRD sync complete...
	I1026 09:16:42.239167       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 09:16:42.239202       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:16:42.239234       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:16:42.260792       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1026 09:16:42.267024       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:16:42.286909       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:16:42.293161       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 09:16:42.293684       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 09:16:42.296639       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:16:42.893820       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:16:44.084499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:16:45.627037       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:16:45.677324       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:16:45.826205       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:16:45.879087       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd] <==
	I1026 09:16:45.510295       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:16:45.513558       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 09:16:45.516766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:16:45.520450       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:16:45.520539       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:16:45.520459       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:16:45.520620       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:16:45.521922       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:16:45.521982       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 09:16:45.521944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 09:16:45.522066       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 09:16:45.521970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 09:16:45.522118       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 09:16:45.525268       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:16:45.530623       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:16:45.531763       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:16:45.532987       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:16:45.536270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:16:45.541613       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:16:45.543861       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:16:45.554280       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 09:16:45.565683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:16:45.571313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:16:45.571404       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:16:45.571474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde] <==
	I1026 09:15:42.646513       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:15:42.652953       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:15:42.663195       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:15:42.665017       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-827956" podCIDRs=["10.244.0.0/24"]
	I1026 09:15:42.665307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:15:42.673024       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:15:42.682801       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:15:42.683069       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 09:15:42.683134       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:15:42.683140       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 09:15:42.683369       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:15:42.683510       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:15:42.683549       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:15:42.683588       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:15:42.683727       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:15:42.683782       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 09:15:42.683826       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:15:42.683739       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:15:42.688893       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:15:42.688990       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:15:42.689072       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-827956"
	I1026 09:15:42.689158       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 09:15:42.702244       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:15:42.718838       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:16:27.695258       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
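	
	The node-lifecycle lines bracket the restart: the controller enters "master disruption mode" while pause-827956 reports not-Ready at 09:15:42 and exits it 45 seconds later once the node is Ready again. A client-go sketch that inspects the same Ready condition; the kubeconfig path is an assumption, the rest is illustrative:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		node, err := client.CoreV1().Nodes().Get(
			context.Background(), "pause-827956", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The same condition the node-lifecycle controller reacts to above.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("NodeReady=%s since %s\n", c.Status, c.LastTransitionTime)
			}
		}
	}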
	
	
	==> kube-proxy [ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7] <==
	I1026 09:16:38.702336       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:16:40.782374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:16:42.282595       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:16:42.282794       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:16:42.282951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:16:42.348402       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:16:42.348556       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:16:42.353259       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:16:42.353645       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:16:42.353833       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:16:42.355316       1 config.go:200] "Starting service config controller"
	I1026 09:16:42.355384       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:16:42.355428       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:16:42.355456       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:16:42.355494       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:16:42.355520       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:16:42.356291       1 config.go:309] "Starting node config controller"
	I1026 09:16:42.356353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:16:42.356382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:16:42.458213       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:16:42.458387       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:16:42.458449       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
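	
	The "nodePortAddresses is unset" warning above means NodePort services are accepted on every local address, not just the node IP. A small Go sketch that enumerates exactly that set of addresses; illustrative only:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// With nodePortAddresses unset, kube-proxy accepts NodePort traffic
		// on all of these, not only the node IP (192.168.85.2 above).
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			panic(err)
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				fmt.Println(ipnet.IP)
			}
		}
	}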
	
	
	==> kube-proxy [e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9] <==
	I1026 09:15:44.503151       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:15:44.656277       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:15:44.756704       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:15:44.756743       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:15:44.756814       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:15:44.778572       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:15:44.778634       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:15:44.782526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:15:44.782898       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:15:44.783136       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:15:44.784616       1 config.go:200] "Starting service config controller"
	I1026 09:15:44.784635       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:15:44.784655       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:15:44.784660       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:15:44.784671       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:15:44.784675       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:15:44.785591       1 config.go:309] "Starting node config controller"
	I1026 09:15:44.785615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:15:44.785622       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:15:44.885402       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:15:44.885420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:15:44.885462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2] <==
	I1026 09:16:40.061671       1 serving.go:386] Generated self-signed cert in-memory
	W1026 09:16:42.026302       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 09:16:42.026349       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 09:16:42.026360       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 09:16:42.026394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 09:16:42.181204       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:16:42.181245       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:16:42.183604       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:42.183656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:42.186861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:16:42.186995       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:16:42.286897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
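	
	The scheduler's startup warnings above are a benign race: it tries to read the extension-apiserver-authentication ConfigMap before its RBAC grant is visible, then proceeds once the client-ca controller syncs at 09:16:42. A minimal client-go sketch of the same read, useful for checking whether the grant is in place; it assumes a kubeconfig at the default path and is otherwise illustrative:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// The same lookup the scheduler attempts at startup; a Forbidden
		// error here reproduces the warning logged above.
		cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
			context.Background(), "extension-apiserver-authentication", metav1.GetOptions{})
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("client-ca present:", cm.Data["client-ca-file"] != "")
	}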
	
	
	==> kube-scheduler [6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a] <==
	E1026 09:15:35.781105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:15:35.781597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:15:35.782963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:15:35.783040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:15:36.653482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:15:36.679105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:15:36.689993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:15:36.691208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:15:36.702033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:15:36.724429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:15:36.725846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:15:36.769694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:15:36.819556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:15:36.877464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:15:36.881016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:15:36.916805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:15:36.998912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:15:37.019022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 09:15:40.228984       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:29.257812       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 09:16:29.257963       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 09:16:29.257977       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 09:16:29.257998       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:29.258161       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 09:16:29.258200       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.674219    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xws2g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07c1368d-3ddb-492b-90ea-8c001e45fbe5" pod="kube-system/kindnet-xws2g"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: I1026 09:16:36.709678    1297 scope.go:117] "RemoveContainer" containerID="5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710327    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b2d3d0c239f2715756705b971b37c069" pod="kube-system/kube-scheduler-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710523    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="63e460761495c032591e443b4caff49c" pod="kube-system/kube-apiserver-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710703    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0a305a42dfd59c6c950ca91762686cb4" pod="kube-system/etcd-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710891    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="24d6f1fed89c10ab427f81b5a7f4af90" pod="kube-system/kube-controller-manager-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.711056    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-256pg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f084e0d-f221-4e24-ab7a-5ae1cb414b56" pod="kube-system/kube-proxy-256pg"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.713962    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xws2g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07c1368d-3ddb-492b-90ea-8c001e45fbe5" pod="kube-system/kindnet-xws2g"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.714166    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-55zjj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7fc468bf-1986-4172-8eba-98945beb861a" pod="kube-system/coredns-66bc5c9577-55zjj"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: I1026 09:16:36.744195    1297 scope.go:117] "RemoveContainer" containerID="4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.744474    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0a305a42dfd59c6c950ca91762686cb4" pod="kube-system/etcd-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.744700    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="24d6f1fed89c10ab427f81b5a7f4af90" pod="kube-system/kube-controller-manager-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.744874    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-256pg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f084e0d-f221-4e24-ab7a-5ae1cb414b56" pod="kube-system/kube-proxy-256pg"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745025    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xws2g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07c1368d-3ddb-492b-90ea-8c001e45fbe5" pod="kube-system/kindnet-xws2g"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745167    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-55zjj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7fc468bf-1986-4172-8eba-98945beb861a" pod="kube-system/coredns-66bc5c9577-55zjj"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745320    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b2d3d0c239f2715756705b971b37c069" pod="kube-system/kube-scheduler-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745466    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="63e460761495c032591e443b4caff49c" pod="kube-system/kube-apiserver-pause-827956"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.052319    1297 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-827956\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.052991    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-827956\" is forbidden: User \"system:node:pause-827956\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" podUID="24d6f1fed89c10ab427f81b5a7f4af90" pod="kube-system/kube-controller-manager-pause-827956"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.053333    1297 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-827956\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.120882    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-256pg\" is forbidden: User \"system:node:pause-827956\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" podUID="0f084e0d-f221-4e24-ab7a-5ae1cb414b56" pod="kube-system/kube-proxy-256pg"
	Oct 26 09:16:48 pause-827956 kubelet[1297]: W1026 09:16:48.706760    1297 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 26 09:16:51 pause-827956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:16:51 pause-827956 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:16:51 pause-827956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
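	
	The kubelet lines show the usual restart window: status updates fail with "connection refused" on 192.168.85.2:8443 until the apiserver comes back at 09:16:42, after which only transient RBAC errors remain. A minimal Go sketch that waits out the same window; the address comes from the log, the timeout is an assumption:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// The check the kubelet effectively repeats above: is the
			// apiserver's secure port reachable at all?
			conn, err := net.DialTimeout("tcp", "192.168.85.2:8443", 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver reachable")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver still unreachable")
	}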
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-827956 -n pause-827956
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-827956 -n pause-827956: exit status 2 (361.675582ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
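
The --format={{.APIServer}} argument is a Go text/template rendered against minikube's status struct, which is why the command prints a bare field value like "Running". A self-contained sketch of the mechanism; the struct here is a stand-in, not minikube's real type:

package main

import (
	"os"
	"text/template"
)

// status stands in for the struct minikube renders; only the field the
// --format template above references is modeled.
type status struct {
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// Prints "Running", matching the stdout captured above.
	tmpl.Execute(os.Stdout, status{APIServer: "Running"})
}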
helpers_test.go:269: (dbg) Run:  kubectl --context pause-827956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-827956
helpers_test.go:243: (dbg) docker inspect pause-827956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2",
	        "Created": "2025-10-26T09:15:12.929228265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 452261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:15:12.990173576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/hosts",
	        "LogPath": "/var/lib/docker/containers/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2/e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2-json.log",
	        "Name": "/pause-827956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-827956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-827956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e18cb5eb7e8e34ef847834143669c1c6bd3899fecab3aae2213449c0399597d2",
	                "LowerDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b0e71dd484b6e3669ca185b27206c2cbe5679a4de4afc1f7012b2d809310e9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-827956",
	                "Source": "/var/lib/docker/volumes/pause-827956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-827956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-827956",
	                "name.minikube.sigs.k8s.io": "pause-827956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27e369101a86bf66b24f7d966d0744e5a6b5ab56d71b22ddbd6f64691224c7a8",
	            "SandboxKey": "/var/run/docker/netns/27e369101a86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-827956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:74:c3:5e:8f:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d6410f3b87c8c4b8e5c46e60c5d5fab80acf7e540d601214a0dae27ba51e762b",
	                    "EndpointID": "7f250ebb4d63599c57b4944879c26f1dfa5b7dc46dddd0f4e0671c1bf7d8019a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-827956",
	                        "e18cb5eb7e8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
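The inspect dump above is where the post-mortem records the node container's published ports (22, 2376, 5000, 8443 and 32443/tcp, each bound to 127.0.0.1 on an ephemeral host port). As a minimal sketch, assuming only the docker CLI and the container name taken from the log (none of this is harness code), the same mapping can be read in Go like so:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// portBinding mirrors the HostIp/HostPort pairs in the inspect JSON above.
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	// docker container inspect prints a JSON array, one entry per container.
	out, err := exec.Command("docker", "container", "inspect", "pause-827956").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no such container")
	}
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}

The 22/tcp binding (host port 33395 here) is the address that provisionDockerMachine dials over SSH later in the log.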
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-827956 -n pause-827956
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-827956 -n pause-827956: exit status 2 (359.087933ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
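Exit status 2 with "Running" on stdout is why the harness flags the result as "(may be ok)": minikube status encodes component state in its exit code, so a non-zero exit can mean the host container is up while Kubernetes itself is paused or stopped, rather than that the command failed. A hedged sketch of distinguishing the two cases, reusing the binary path, profile and format string from the log (the interpretation of the exit code is illustrative, not the harness's actual check):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "pause-827956", "-n", "pause-827956")
	out, err := cmd.Output() // stdout still carries "Running" on a non-zero exit
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host %s, all components up\n", strings.TrimSpace(string(out)))
	case errors.As(err, &exitErr):
		// e.g. exit code 2 here: host container running, cluster paused.
		fmt.Printf("host %s, exit code %d (may be ok)\n",
			strings.TrimSpace(string(out)), exitErr.ExitCode())
	default:
		log.Fatal(err) // the binary could not be started at all
	}
}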
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-827956 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-827956 logs -n 25: (1.685794581s)
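Each "(dbg) Run:"/"(dbg) Done:" pair in these helpers is the harness executing a command and, on success, reporting its wall-clock duration, as with the 1.685794581s above. A rough sketch of that pattern, with names borrowed from the log rather than from helpers_test.go itself:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	name := "out/minikube-linux-arm64"
	args := []string{"-p", "pause-827956", "logs", "-n", "25"}
	fmt.Printf("(dbg) Run:  %s %s\n", name, strings.Join(args, " "))
	start := time.Now()
	// CombinedOutput captures stdout and stderr together, matching how
	// the report interleaves both streams.
	if _, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("(dbg) Non-zero exit: %v", err)
	}
	fmt.Printf("(dbg) Done: %s %s: (%s)\n", name, strings.Join(args, " "), time.Since(start))
}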
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:11 UTC │ 26 Oct 25 09:11 UTC │
	│ delete  │ -p NoKubernetes-948910                                                                                                                   │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:11 UTC │ 26 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:11 UTC │ 26 Oct 25 09:12 UTC │
	│ ssh     │ -p NoKubernetes-948910 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │                     │
	│ stop    │ -p NoKubernetes-948910                                                                                                                   │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:12 UTC │
	│ start   │ -p NoKubernetes-948910 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:12 UTC │
	│ start   │ -p missing-upgrade-019301 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-019301    │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:13 UTC │
	│ ssh     │ -p NoKubernetes-948910 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │                     │
	│ delete  │ -p NoKubernetes-948910                                                                                                                   │ NoKubernetes-948910       │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:12 UTC │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:12 UTC │ 26 Oct 25 09:13 UTC │
	│ stop    │ -p kubernetes-upgrade-275732                                                                                                             │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ delete  │ -p missing-upgrade-019301                                                                                                                │ missing-upgrade-019301    │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p stopped-upgrade-017998 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-017998    │ jenkins │ v1.32.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-275732 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-275732 │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │                     │
	│ stop    │ stopped-upgrade-017998 stop                                                                                                              │ stopped-upgrade-017998    │ jenkins │ v1.32.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:13 UTC │
	│ start   │ -p stopped-upgrade-017998 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-017998    │ jenkins │ v1.37.0 │ 26 Oct 25 09:13 UTC │ 26 Oct 25 09:14 UTC │
	│ delete  │ -p stopped-upgrade-017998                                                                                                                │ stopped-upgrade-017998    │ jenkins │ v1.37.0 │ 26 Oct 25 09:14 UTC │ 26 Oct 25 09:14 UTC │
	│ start   │ -p running-upgrade-931705 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-931705    │ jenkins │ v1.32.0 │ 26 Oct 25 09:14 UTC │ 26 Oct 25 09:14 UTC │
	│ start   │ -p running-upgrade-931705 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-931705    │ jenkins │ v1.37.0 │ 26 Oct 25 09:14 UTC │ 26 Oct 25 09:15 UTC │
	│ delete  │ -p running-upgrade-931705                                                                                                                │ running-upgrade-931705    │ jenkins │ v1.37.0 │ 26 Oct 25 09:15 UTC │ 26 Oct 25 09:15 UTC │
	│ start   │ -p pause-827956 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-827956              │ jenkins │ v1.37.0 │ 26 Oct 25 09:15 UTC │ 26 Oct 25 09:16 UTC │
	│ start   │ -p pause-827956 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-827956              │ jenkins │ v1.37.0 │ 26 Oct 25 09:16 UTC │ 26 Oct 25 09:16 UTC │
	│ pause   │ -p pause-827956 --alsologtostderr -v=5                                                                                                   │ pause-827956              │ jenkins │ v1.37.0 │ 26 Oct 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:16:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:16:27.599366  455005 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:16:27.599540  455005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:16:27.599554  455005 out.go:374] Setting ErrFile to fd 2...
	I1026 09:16:27.599560  455005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:16:27.599829  455005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:16:27.600218  455005 out.go:368] Setting JSON to false
	I1026 09:16:27.601263  455005 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10738,"bootTime":1761459450,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:16:27.601336  455005 start.go:141] virtualization:  
	I1026 09:16:27.604360  455005 out.go:179] * [pause-827956] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:16:27.608259  455005 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:16:27.608378  455005 notify.go:220] Checking for updates...
	I1026 09:16:27.614334  455005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:16:27.617255  455005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:16:27.620111  455005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:16:27.622963  455005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:16:27.626412  455005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:16:27.629867  455005 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:27.630437  455005 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:16:27.655308  455005 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:16:27.655435  455005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:16:27.722983  455005 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:16:27.712899439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:16:27.723095  455005 docker.go:318] overlay module found
	I1026 09:16:27.728153  455005 out.go:179] * Using the docker driver based on existing profile
	I1026 09:16:27.730965  455005 start.go:305] selected driver: docker
	I1026 09:16:27.730985  455005 start.go:925] validating driver "docker" against &{Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:16:27.731120  455005 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:16:27.731224  455005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:16:27.792549  455005 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:16:27.77733343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:16:27.793173  455005 cni.go:84] Creating CNI manager for ""
	I1026 09:16:27.793298  455005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:16:27.793351  455005 start.go:349] cluster config:
	{Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:16:27.798409  455005 out.go:179] * Starting "pause-827956" primary control-plane node in "pause-827956" cluster
	I1026 09:16:27.801385  455005 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:16:27.804440  455005 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:16:27.807366  455005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:16:27.807430  455005 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:16:27.807444  455005 cache.go:58] Caching tarball of preloaded images
	I1026 09:16:27.807457  455005 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:16:27.807532  455005 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:16:27.807543  455005 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:16:27.807684  455005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/config.json ...
	I1026 09:16:27.828246  455005 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:16:27.828271  455005 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:16:27.828284  455005 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:16:27.828306  455005 start.go:360] acquireMachinesLock for pause-827956: {Name:mkdcebf819592c6458943985de21a55c0d7f88a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:16:27.828364  455005 start.go:364] duration metric: took 35.832µs to acquireMachinesLock for "pause-827956"
	I1026 09:16:27.828394  455005 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:16:27.828403  455005 fix.go:54] fixHost starting: 
	I1026 09:16:27.828663  455005 cli_runner.go:164] Run: docker container inspect pause-827956 --format={{.State.Status}}
	I1026 09:16:27.845298  455005 fix.go:112] recreateIfNeeded on pause-827956: state=Running err=<nil>
	W1026 09:16:27.845327  455005 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:16:25.904298  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:26.404146  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:26.904522  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:27.404071  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:27.904841  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:27.904922  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:27.947240  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:27.947264  445201 cri.go:89] found id: ""
	I1026 09:16:27.947272  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:27.947327  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:27.951068  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:27.951138  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:27.979942  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:27.979965  445201 cri.go:89] found id: ""
	I1026 09:16:27.979973  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:27.980024  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:27.984014  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:27.984083  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:28.016316  445201 cri.go:89] found id: ""
	I1026 09:16:28.016345  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.016354  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:28.016360  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:28.016418  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:28.073058  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:28.073079  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:28.073084  445201 cri.go:89] found id: ""
	I1026 09:16:28.073091  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:28.073155  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.078209  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.082591  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:28.082666  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:28.127339  445201 cri.go:89] found id: ""
	I1026 09:16:28.127360  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.127369  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:28.127375  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:28.127432  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:28.162170  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:28.162189  445201 cri.go:89] found id: "bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07"
	I1026 09:16:28.162194  445201 cri.go:89] found id: ""
	I1026 09:16:28.162202  445201 logs.go:282] 2 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07]
	I1026 09:16:28.162261  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.166628  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:28.175901  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:28.175972  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:28.204244  445201 cri.go:89] found id: ""
	I1026 09:16:28.204272  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.204281  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:28.204287  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:28.204351  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:28.235132  445201 cri.go:89] found id: ""
	I1026 09:16:28.235153  445201 logs.go:282] 0 containers: []
	W1026 09:16:28.235162  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:28.235171  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:28.235182  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:28.414267  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:28.414344  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:28.447217  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:28.447250  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:28.578852  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:28.578925  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:28.650133  445201 logs.go:123] Gathering logs for kube-controller-manager [bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07] ...
	I1026 09:16:28.650171  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc742ddfc8580f1a07df07ce1af7628e3073d1c05dd28a681bf0a0c0b0037b07"
	I1026 09:16:28.679124  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:28.679149  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:28.785166  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:28.785258  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:28.823935  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:28.823966  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:28.910201  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:28.910223  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:28.910236  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:28.953498  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:28.953549  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:28.986007  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:28.986032  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:27.848444  455005 out.go:252] * Updating the running docker "pause-827956" container ...
	I1026 09:16:27.848480  455005 machine.go:93] provisionDockerMachine start ...
	I1026 09:16:27.848575  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:27.865944  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:27.866271  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:27.866316  455005 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:16:28.027090  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-827956
	
	I1026 09:16:28.027175  455005 ubuntu.go:182] provisioning hostname "pause-827956"
	I1026 09:16:28.027282  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:28.059041  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:28.059356  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:28.059368  455005 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-827956 && echo "pause-827956" | sudo tee /etc/hostname
	I1026 09:16:28.235612  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-827956
	
	I1026 09:16:28.235700  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:28.262981  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:28.263342  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:28.263374  455005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-827956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-827956/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-827956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:16:28.423172  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:16:28.423196  455005 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:16:28.423233  455005 ubuntu.go:190] setting up certificates
	I1026 09:16:28.423244  455005 provision.go:84] configureAuth start
	I1026 09:16:28.423317  455005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-827956
	I1026 09:16:28.448828  455005 provision.go:143] copyHostCerts
	I1026 09:16:28.448896  455005 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:16:28.448912  455005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:16:28.450333  455005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:16:28.450486  455005 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:16:28.450494  455005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:16:28.450529  455005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:16:28.450589  455005 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:16:28.450594  455005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:16:28.450617  455005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:16:28.450671  455005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.pause-827956 san=[127.0.0.1 192.168.85.2 localhost minikube pause-827956]
	I1026 09:16:28.874222  455005 provision.go:177] copyRemoteCerts
	I1026 09:16:28.874331  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:16:28.874414  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:28.893037  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:29.005495  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:16:29.026154  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 09:16:29.047452  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:16:29.065085  455005 provision.go:87] duration metric: took 641.813536ms to configureAuth
	I1026 09:16:29.065156  455005 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:16:29.065428  455005 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:29.065590  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:29.083509  455005 main.go:141] libmachine: Using SSH client type: native
	I1026 09:16:29.083817  455005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1026 09:16:29.083836  455005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:16:31.528088  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:31.538411  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:31.538477  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:31.564156  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:31.564176  445201 cri.go:89] found id: ""
	I1026 09:16:31.564184  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:31.564240  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.567923  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:31.567989  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:31.597656  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:31.597675  445201 cri.go:89] found id: ""
	I1026 09:16:31.597683  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:31.597736  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.601689  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:31.601768  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:31.629019  445201 cri.go:89] found id: ""
	I1026 09:16:31.629044  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.629053  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:31.629060  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:31.629126  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:31.656015  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:31.656036  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:31.656041  445201 cri.go:89] found id: ""
	I1026 09:16:31.656048  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:31.656102  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.659825  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.663471  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:31.663540  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:31.689125  445201 cri.go:89] found id: ""
	I1026 09:16:31.689152  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.689160  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:31.689167  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:31.689294  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:31.718149  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:31.718169  445201 cri.go:89] found id: ""
	I1026 09:16:31.718177  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:31.718228  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:31.721878  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:31.721959  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:31.749071  445201 cri.go:89] found id: ""
	I1026 09:16:31.749098  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.749108  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:31.749115  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:31.749242  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:31.776036  445201 cri.go:89] found id: ""
	I1026 09:16:31.776102  445201 logs.go:282] 0 containers: []
	W1026 09:16:31.776146  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:31.776181  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:31.776199  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:31.807779  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:31.807810  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:31.835831  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:31.835874  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:31.923892  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:31.923944  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:32.064335  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:32.064376  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:32.100733  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:32.100770  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:32.129916  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:32.129948  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:32.146572  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:32.146612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:32.212390  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:32.212413  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:32.212440  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:32.303251  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:32.303290  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:34.854846  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:34.867774  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:34.867849  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:34.895116  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:34.895137  445201 cri.go:89] found id: ""
	I1026 09:16:34.895144  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:34.895202  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:34.899593  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:34.899663  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:34.928083  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:34.928102  445201 cri.go:89] found id: ""
	I1026 09:16:34.928110  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:34.928193  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:34.932132  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:34.932202  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:34.964519  445201 cri.go:89] found id: ""
	I1026 09:16:34.964541  445201 logs.go:282] 0 containers: []
	W1026 09:16:34.964550  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:34.964556  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:34.964614  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:35.003980  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:35.004001  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:35.004006  445201 cri.go:89] found id: ""
	I1026 09:16:35.004015  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:35.004080  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.009750  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.015183  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:35.015256  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:35.057026  445201 cri.go:89] found id: ""
	I1026 09:16:35.057052  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.057061  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:35.057067  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:35.057154  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:35.093208  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:35.093231  445201 cri.go:89] found id: ""
	I1026 09:16:35.093240  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:35.093328  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:35.098073  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:35.098201  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:35.146763  445201 cri.go:89] found id: ""
	I1026 09:16:35.146836  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.146866  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:35.146887  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:35.146980  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:35.193198  445201 cri.go:89] found id: ""
	I1026 09:16:35.193224  445201 logs.go:282] 0 containers: []
	W1026 09:16:35.193233  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:35.193276  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:35.193295  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:35.209978  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:35.210006  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:35.270161  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:35.270197  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:35.306149  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:35.306226  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:35.404229  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:35.404265  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:34.421750  455005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:16:34.421777  455005 machine.go:96] duration metric: took 6.573288083s to provisionDockerMachine
	I1026 09:16:34.421789  455005 start.go:293] postStartSetup for "pause-827956" (driver="docker")
	I1026 09:16:34.421808  455005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:16:34.421873  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:16:34.421917  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.448082  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.554549  455005 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:16:34.557864  455005 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:16:34.557894  455005 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:16:34.557905  455005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:16:34.557981  455005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:16:34.558092  455005 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:16:34.558197  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:16:34.565595  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:16:34.584236  455005 start.go:296] duration metric: took 162.431308ms for postStartSetup
	I1026 09:16:34.584382  455005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:16:34.584449  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.601533  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.704560  455005 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:16:34.709547  455005 fix.go:56] duration metric: took 6.881137424s for fixHost
	I1026 09:16:34.709573  455005 start.go:83] releasing machines lock for "pause-827956", held for 6.881196428s
	I1026 09:16:34.709645  455005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-827956
	I1026 09:16:34.727063  455005 ssh_runner.go:195] Run: cat /version.json
	I1026 09:16:34.727113  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.727113  455005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:16:34.727170  455005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-827956
	I1026 09:16:34.746193  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.748794  455005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/pause-827956/id_rsa Username:docker}
	I1026 09:16:34.846566  455005 ssh_runner.go:195] Run: systemctl --version
	I1026 09:16:34.942243  455005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:16:34.990055  455005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:16:34.995544  455005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:16:34.995638  455005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:16:35.005970  455005 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:16:35.006020  455005 start.go:495] detecting cgroup driver to use...
	I1026 09:16:35.006112  455005 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:16:35.006197  455005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:16:35.025491  455005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:16:35.042168  455005 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:16:35.042284  455005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:16:35.062105  455005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:16:35.079592  455005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:16:35.247261  455005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:16:35.434244  455005 docker.go:234] disabling docker service ...
	I1026 09:16:35.434314  455005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:16:35.462137  455005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:16:35.476462  455005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:16:35.661138  455005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:16:35.851498  455005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:16:35.866184  455005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:16:35.881141  455005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:16:35.881207  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.893379  455005 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:16:35.893441  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.904393  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.915367  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.923900  455005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:16:35.941609  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.952905  455005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.962668  455005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:16:35.974177  455005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:16:35.982575  455005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:16:35.989751  455005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:16:36.124978  455005 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:16:36.549120  455005 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:16:36.549195  455005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:16:36.553041  455005 start.go:563] Will wait 60s for crictl version
	I1026 09:16:36.553131  455005 ssh_runner.go:195] Run: which crictl
	I1026 09:16:36.556668  455005 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:16:36.582568  455005 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:16:36.582680  455005 ssh_runner.go:195] Run: crio --version
	I1026 09:16:36.616288  455005 ssh_runner.go:195] Run: crio --version
	I1026 09:16:36.693056  455005 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:16:36.696051  455005 cli_runner.go:164] Run: docker network inspect pause-827956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:16:36.720275  455005 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:16:36.725128  455005 kubeadm.go:883] updating cluster {Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:16:36.725261  455005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:16:36.725312  455005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:16:36.817079  455005 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:16:36.817100  455005 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:16:36.817153  455005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:16:36.883122  455005 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:16:36.883142  455005 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:16:36.883150  455005 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:16:36.883636  455005 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-827956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:16:36.883737  455005 ssh_runner.go:195] Run: crio config
	I1026 09:16:37.019180  455005 cni.go:84] Creating CNI manager for ""
	I1026 09:16:37.019252  455005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:16:37.019287  455005 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:16:37.019345  455005 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-827956 NodeName:pause-827956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:16:37.019532  455005 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-827956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:16:37.019645  455005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:16:37.044090  455005 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:16:37.044218  455005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:16:37.055630  455005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1026 09:16:37.080014  455005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:16:37.098631  455005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1026 09:16:37.124359  455005 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:16:37.131103  455005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:16:37.389640  455005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:16:37.405879  455005 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956 for IP: 192.168.85.2
	I1026 09:16:37.405953  455005 certs.go:195] generating shared ca certs ...
	I1026 09:16:37.405985  455005 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:16:37.406183  455005 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:16:37.406256  455005 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:16:37.406291  455005 certs.go:257] generating profile certs ...
	I1026 09:16:37.406419  455005 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key
	I1026 09:16:37.406529  455005 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/apiserver.key.ba406644
	I1026 09:16:37.406649  455005 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/proxy-client.key
	I1026 09:16:37.406833  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:16:37.406891  455005 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:16:37.406916  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:16:37.406976  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:16:37.407024  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:16:37.407081  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:16:37.407154  455005 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:16:37.407804  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:16:37.448838  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:16:37.479815  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:16:37.508898  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:16:37.541412  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 09:16:37.578685  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:16:37.613557  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:16:37.636814  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:16:37.660729  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:16:37.678341  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:16:37.696053  455005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:16:37.720352  455005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:16:37.741552  455005 ssh_runner.go:195] Run: openssl version
	I1026 09:16:37.763132  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:16:37.779261  455005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:16:37.786015  455005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:16:37.786086  455005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:16:37.858955  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:16:37.867699  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:16:37.876635  455005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:16:37.880917  455005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:16:37.880982  455005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:16:37.924984  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:16:37.933896  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:16:37.943657  455005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:16:37.951008  455005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:16:37.951084  455005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:16:38.001805  455005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:16:38.013566  455005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:16:38.018523  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:16:38.065363  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:16:38.110952  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:16:38.153764  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:16:38.196997  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:16:38.246379  455005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 09:16:38.294647  455005 kubeadm.go:400] StartCluster: {Name:pause-827956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-827956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:16:38.294787  455005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:16:38.294858  455005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:16:38.334769  455005 cri.go:89] found id: "abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0"
	I1026 09:16:38.334813  455005 cri.go:89] found id: "d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd"
	I1026 09:16:38.334820  455005 cri.go:89] found id: "ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7"
	I1026 09:16:38.334824  455005 cri.go:89] found id: "e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a"
	I1026 09:16:38.334827  455005 cri.go:89] found id: "ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656"
	I1026 09:16:38.334831  455005 cri.go:89] found id: "6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd"
	I1026 09:16:38.334836  455005 cri.go:89] found id: "12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2"
	I1026 09:16:38.334839  455005 cri.go:89] found id: "5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	I1026 09:16:38.334842  455005 cri.go:89] found id: "17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e"
	I1026 09:16:38.334851  455005 cri.go:89] found id: "e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9"
	I1026 09:16:38.334866  455005 cri.go:89] found id: "89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde"
	I1026 09:16:38.334870  455005 cri.go:89] found id: "4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	I1026 09:16:38.334873  455005 cri.go:89] found id: "37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d"
	I1026 09:16:38.334878  455005 cri.go:89] found id: "6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a"
	I1026 09:16:38.334888  455005 cri.go:89] found id: ""
	I1026 09:16:38.334970  455005 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:16:38.348923  455005 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:16:38Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:16:38.349012  455005 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:16:38.358687  455005 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:16:38.358757  455005 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:16:38.358809  455005 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:16:38.367239  455005 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:16:38.368018  455005 kubeconfig.go:125] found "pause-827956" server: "https://192.168.85.2:8443"
	I1026 09:16:38.369012  455005 kapi.go:59] client config for pause-827956: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 09:16:38.369675  455005 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 09:16:38.369695  455005 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 09:16:38.369701  455005 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 09:16:38.369746  455005 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 09:16:38.369752  455005 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 09:16:38.370139  455005 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:16:38.386286  455005 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 09:16:38.386323  455005 kubeadm.go:601] duration metric: took 27.557917ms to restartPrimaryControlPlane
	I1026 09:16:38.386332  455005 kubeadm.go:402] duration metric: took 91.69617ms to StartCluster
	I1026 09:16:38.386357  455005 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:16:38.386431  455005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:16:38.387467  455005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:16:38.387735  455005 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:16:38.388066  455005 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:16:38.388432  455005 config.go:182] Loaded profile config "pause-827956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:16:38.393664  455005 out.go:179] * Enabled addons: 
	I1026 09:16:38.393728  455005 out.go:179] * Verifying Kubernetes components...
	I1026 09:16:35.461433  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:35.461514  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:35.632313  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:35.632388  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 09:16:35.704335  445201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 09:16:35.710695  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:35.710727  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:35.710741  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	W1026 09:16:35.829972  445201 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:35.830049  445201 retry.go:31] will retry after 24.298648909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 09:16:35.900420  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:35.900498  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:35.943003  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:35.943071  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:38.478842  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:38.499470  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:38.499549  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:38.561966  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:38.561991  445201 cri.go:89] found id: ""
	I1026 09:16:38.562000  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:38.562054  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.565941  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:38.566019  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:38.618045  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:38.618070  445201 cri.go:89] found id: ""
	I1026 09:16:38.618078  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:38.618133  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.622308  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:38.622382  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:38.685130  445201 cri.go:89] found id: ""
	I1026 09:16:38.685158  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.685167  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:38.685173  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:38.685237  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:38.741160  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:38.741185  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:38.741190  445201 cri.go:89] found id: ""
	I1026 09:16:38.741197  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:38.741253  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.745509  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.749859  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:38.749939  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:38.787906  445201 cri.go:89] found id: ""
	I1026 09:16:38.787934  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.787943  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:38.787949  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:38.788007  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:38.843118  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:38.843144  445201 cri.go:89] found id: ""
	I1026 09:16:38.843153  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:38.843209  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:38.851429  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:38.851513  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:38.911991  445201 cri.go:89] found id: ""
	I1026 09:16:38.912018  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.912027  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:38.912033  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:38.912093  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:38.964607  445201 cri.go:89] found id: ""
	I1026 09:16:38.964634  445201 logs.go:282] 0 containers: []
	W1026 09:16:38.964643  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:38.964657  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:38.964668  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:39.099227  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:39.099252  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:39.099266  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:39.257625  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:39.257710  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:39.323941  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:39.323981  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:39.400574  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:39.400612  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:39.477668  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:39.477707  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:39.647471  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:39.647511  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:39.687069  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:39.687105  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:39.730533  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:39.730615  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:39.781743  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:39.781825  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:38.396509  455005 addons.go:514] duration metric: took 8.43828ms for enable addons: enabled=[]
	I1026 09:16:38.396608  455005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:16:38.753467  455005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:16:38.777771  455005 node_ready.go:35] waiting up to 6m0s for node "pause-827956" to be "Ready" ...
	I1026 09:16:42.130375  455005 node_ready.go:49] node "pause-827956" is "Ready"
	I1026 09:16:42.130411  455005 node_ready.go:38] duration metric: took 3.352604487s for node "pause-827956" to be "Ready" ...
	I1026 09:16:42.130426  455005 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:16:42.130495  455005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:42.149355  455005 api_server.go:72] duration metric: took 3.761582632s to wait for apiserver process to appear ...
	I1026 09:16:42.149385  455005 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:16:42.149406  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:42.218608  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 09:16:42.218642  455005 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 09:16:42.397606  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:42.417937  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:42.418016  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:42.495359  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:42.495378  445201 cri.go:89] found id: ""
	I1026 09:16:42.495386  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:42.495440  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.501607  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:42.501703  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:42.551870  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:42.551932  445201 cri.go:89] found id: ""
	I1026 09:16:42.551954  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:42.552046  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.556449  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:42.556566  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:42.586577  445201 cri.go:89] found id: ""
	I1026 09:16:42.586646  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.586678  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:42.586703  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:42.586820  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:42.617869  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:42.617892  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:42.617897  445201 cri.go:89] found id: ""
	I1026 09:16:42.617915  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:42.617970  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.625871  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.630165  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:42.630242  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:42.666758  445201 cri.go:89] found id: ""
	I1026 09:16:42.666834  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.666858  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:42.666880  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:42.666965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:42.701859  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:42.701926  445201 cri.go:89] found id: ""
	I1026 09:16:42.701948  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:42.702031  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:42.706073  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:42.706198  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:42.734210  445201 cri.go:89] found id: ""
	I1026 09:16:42.734238  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.734247  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:42.734253  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:42.734316  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:42.761520  445201 cri.go:89] found id: ""
	I1026 09:16:42.761543  445201 logs.go:282] 0 containers: []
	W1026 09:16:42.761561  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:42.761577  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:42.761588  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:42.852029  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:42.852074  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:42.900883  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:42.900926  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:42.919113  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:42.919143  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:43.013913  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
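Every "failed describe nodes" block in this log is the same symptom: the node-local kubeconfig points kubectl at localhost:8443, and while the apiserver container is being restarted nothing listens there, so the TCP connection is refused. Telling "the command exited non-zero" apart from "the command could not run at all" is the useful distinction here; a hedged sketch of that pattern with os/exec (paths copied from the log above, illustrative only):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// kubectl ran but exited non-zero, e.g. "connection ... refused"
		// while the apiserver is still coming back up.
		fmt.Printf("kubectl exited %d:\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not start kubectl:", err) // binary missing, etc.
		return
	}
	fmt.Printf("%s", out)
}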
	I1026 09:16:43.013936  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:43.013948  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:43.112560  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:43.112603  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:43.165342  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:43.165375  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:43.197760  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:43.197792  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:43.240362  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:43.240393  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:43.395807  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:43.395853  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
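The gathering loop above is mechanical: for each expected component, ask the runtime for matching container IDs with `crictl ps -a --quiet --name=<component>`, then tail the last 400 lines of each match with `crictl logs`. A self-contained sketch of that loop, run directly on the node rather than through minikube's ssh_runner (assumes crictl is installed and sudo is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `crictl ps -a --quiet --name=<name>` step from
// the log above: one container ID per output line, none if no match.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no %q containers found\n", name)
			continue
		}
		for _, id := range ids {
			// Same tail length minikube uses when gathering logs.
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", name, id, out)
		}
	}
}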
	I1026 09:16:42.650177  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:42.660274  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 09:16:43.149472  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:43.163458  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 09:16:43.650188  455005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:16:43.658431  455005 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 09:16:43.659487  455005 api_server.go:141] control plane version: v1.34.1
	I1026 09:16:43.659523  455005 api_server.go:131] duration metric: took 1.510131165s to wait for apiserver health ...
	I1026 09:16:43.659532  455005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:16:43.663384  455005 system_pods.go:59] 7 kube-system pods found
	I1026 09:16:43.663426  455005 system_pods.go:61] "coredns-66bc5c9577-55zjj" [7fc468bf-1986-4172-8eba-98945beb861a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:16:43.663436  455005 system_pods.go:61] "etcd-pause-827956" [03cc5c30-6b85-4b59-ba1d-75d4036c44a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:16:43.663443  455005 system_pods.go:61] "kindnet-xws2g" [07c1368d-3ddb-492b-90ea-8c001e45fbe5] Running
	I1026 09:16:43.663451  455005 system_pods.go:61] "kube-apiserver-pause-827956" [33092d00-f827-4599-9a46-880483bf6300] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:16:43.663499  455005 system_pods.go:61] "kube-controller-manager-pause-827956" [bc582be5-af80-46ec-a948-b5b66b211dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:16:43.663515  455005 system_pods.go:61] "kube-proxy-256pg" [0f084e0d-f221-4e24-ab7a-5ae1cb414b56] Running
	I1026 09:16:43.663522  455005 system_pods.go:61] "kube-scheduler-pause-827956" [b3e4894b-19a5-4506-9cfe-c7ae807e139e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:16:43.663529  455005 system_pods.go:74] duration metric: took 3.990497ms to wait for pod list to return data ...
	I1026 09:16:43.663543  455005 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:16:43.666244  455005 default_sa.go:45] found service account: "default"
	I1026 09:16:43.666295  455005 default_sa.go:55] duration metric: took 2.7456ms for default service account to be created ...
	I1026 09:16:43.666307  455005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:16:43.669581  455005 system_pods.go:86] 7 kube-system pods found
	I1026 09:16:43.669619  455005 system_pods.go:89] "coredns-66bc5c9577-55zjj" [7fc468bf-1986-4172-8eba-98945beb861a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:16:43.669633  455005 system_pods.go:89] "etcd-pause-827956" [03cc5c30-6b85-4b59-ba1d-75d4036c44a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:16:43.669643  455005 system_pods.go:89] "kindnet-xws2g" [07c1368d-3ddb-492b-90ea-8c001e45fbe5] Running
	I1026 09:16:43.669654  455005 system_pods.go:89] "kube-apiserver-pause-827956" [33092d00-f827-4599-9a46-880483bf6300] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:16:43.669667  455005 system_pods.go:89] "kube-controller-manager-pause-827956" [bc582be5-af80-46ec-a948-b5b66b211dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:16:43.669677  455005 system_pods.go:89] "kube-proxy-256pg" [0f084e0d-f221-4e24-ab7a-5ae1cb414b56] Running
	I1026 09:16:43.669684  455005 system_pods.go:89] "kube-scheduler-pause-827956" [b3e4894b-19a5-4506-9cfe-c7ae807e139e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:16:43.669692  455005 system_pods.go:126] duration metric: took 3.378789ms to wait for k8s-apps to be running ...
	I1026 09:16:43.669708  455005 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:16:43.669786  455005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:16:43.683216  455005 system_svc.go:56] duration metric: took 13.497458ms WaitForService to wait for kubelet
	I1026 09:16:43.683258  455005 kubeadm.go:586] duration metric: took 5.295490772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:16:43.683278  455005 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:16:43.686311  455005 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:16:43.686339  455005 node_conditions.go:123] node cpu capacity is 2
	I1026 09:16:43.686352  455005 node_conditions.go:105] duration metric: took 3.067738ms to run NodePressure ...
	I1026 09:16:43.686364  455005 start.go:241] waiting for startup goroutines ...
	I1026 09:16:43.686371  455005 start.go:246] waiting for cluster config update ...
	I1026 09:16:43.686380  455005 start.go:255] writing updated cluster config ...
	I1026 09:16:43.686756  455005 ssh_runner.go:195] Run: rm -f paused
	I1026 09:16:43.690193  455005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:16:43.690989  455005 kapi.go:59] client config for pause-827956: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key", CAFile:"/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
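The kapi.go dump above is the client configuration minikube builds for the pause-827956 profile: host, client certificate and key, and the cluster CA (printed through a sanitized copy of rest.TLSClientConfig). The readiness checks that follow are plain client-go calls against such a config; a minimal, hypothetical reconstruction using the same fields:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Field-for-field the shape of the dump above; the paths are the
	// profile files from this particular run, so treat them as examples.
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21772-293616/.minikube/profiles/pause-827956/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same query the "waiting for kube-system pods" step issues.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}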
	I1026 09:16:43.693929  455005 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-55zjj" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:16:45.702181  455005 pod_ready.go:104] pod "coredns-66bc5c9577-55zjj" is not "Ready", error: <nil>
	I1026 09:16:45.947614  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:45.958234  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:45.958307  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:45.983853  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:45.983876  445201 cri.go:89] found id: ""
	I1026 09:16:45.983884  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:45.983938  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:45.987866  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:45.987940  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:46.014318  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:46.014341  445201 cri.go:89] found id: ""
	I1026 09:16:46.014350  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:46.014411  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.018353  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:46.018426  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:46.048052  445201 cri.go:89] found id: ""
	I1026 09:16:46.048078  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.048086  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:46.048093  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:46.048200  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:46.076123  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:46.076197  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:46.076208  445201 cri.go:89] found id: ""
	I1026 09:16:46.076223  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:46.076283  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.080561  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.084247  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:46.084318  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:46.115050  445201 cri.go:89] found id: ""
	I1026 09:16:46.115076  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.115085  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:46.115104  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:46.115163  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:46.142084  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:46.142107  445201 cri.go:89] found id: ""
	I1026 09:16:46.142115  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:46.142194  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:46.146126  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:46.146204  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:46.173893  445201 cri.go:89] found id: ""
	I1026 09:16:46.173928  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.173939  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:46.173945  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:46.174009  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:46.207007  445201 cri.go:89] found id: ""
	I1026 09:16:46.207074  445201 logs.go:282] 0 containers: []
	W1026 09:16:46.207098  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:46.207121  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:46.207146  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:46.281196  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:46.281257  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:46.281286  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:46.365641  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:46.365679  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:46.395828  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:46.395859  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:46.429785  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:46.429825  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:46.450173  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:46.450204  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:46.493277  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:46.493310  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:46.552581  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:46.552619  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:46.588862  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:46.588892  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:46.679688  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:46.679719  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:49.344869  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:49.356555  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:49.356623  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:49.389547  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:49.389571  445201 cri.go:89] found id: ""
	I1026 09:16:49.389580  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:49.389639  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.393152  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:49.393236  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:49.419217  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:49.419240  445201 cri.go:89] found id: ""
	I1026 09:16:49.419249  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:49.419320  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.423239  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:49.423309  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:49.453234  445201 cri.go:89] found id: ""
	I1026 09:16:49.453257  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.453266  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:49.453272  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:49.453335  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:49.483822  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:49.483846  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:49.483851  445201 cri.go:89] found id: ""
	I1026 09:16:49.483859  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:49.483912  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.487530  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.491109  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:49.491181  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:49.520353  445201 cri.go:89] found id: ""
	I1026 09:16:49.520374  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.520383  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:49.520389  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:49.520448  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:49.546473  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:49.546502  445201 cri.go:89] found id: ""
	I1026 09:16:49.546510  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:49.546569  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:49.550263  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:49.550336  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:49.583559  445201 cri.go:89] found id: ""
	I1026 09:16:49.583588  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.583597  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:49.583604  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:49.583661  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:49.608811  445201 cri.go:89] found id: ""
	I1026 09:16:49.608834  445201 logs.go:282] 0 containers: []
	W1026 09:16:49.608842  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:49.608856  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:49.608867  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:49.641306  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:49.641330  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:49.786127  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:49.786165  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:49.892036  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:49.892055  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:49.892068  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:49.995466  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:49.995550  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:50.055401  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:50.055506  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:50.086674  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:50.086794  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:50.174213  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:50.174252  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:50.195967  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:50.196051  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:50.250465  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:50.250500  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	W1026 09:16:48.199984  455005 pod_ready.go:104] pod "coredns-66bc5c9577-55zjj" is not "Ready", error: <nil>
	I1026 09:16:49.199566  455005 pod_ready.go:94] pod "coredns-66bc5c9577-55zjj" is "Ready"
	I1026 09:16:49.199597  455005 pod_ready.go:86] duration metric: took 5.50564435s for pod "coredns-66bc5c9577-55zjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:49.202343  455005 pod_ready.go:83] waiting for pod "etcd-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:49.207635  455005 pod_ready.go:94] pod "etcd-pause-827956" is "Ready"
	I1026 09:16:49.207670  455005 pod_ready.go:86] duration metric: took 5.303916ms for pod "etcd-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:49.210161  455005 pod_ready.go:83] waiting for pod "kube-apiserver-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.218330  455005 pod_ready.go:94] pod "kube-apiserver-pause-827956" is "Ready"
	I1026 09:16:50.218353  455005 pod_ready.go:86] duration metric: took 1.008168996s for pod "kube-apiserver-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.223635  455005 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.229706  455005 pod_ready.go:94] pod "kube-controller-manager-pause-827956" is "Ready"
	I1026 09:16:50.229730  455005 pod_ready.go:86] duration metric: took 6.072492ms for pod "kube-controller-manager-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.398044  455005 pod_ready.go:83] waiting for pod "kube-proxy-256pg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.797043  455005 pod_ready.go:94] pod "kube-proxy-256pg" is "Ready"
	I1026 09:16:50.797067  455005 pod_ready.go:86] duration metric: took 398.955902ms for pod "kube-proxy-256pg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:50.998254  455005 pod_ready.go:83] waiting for pod "kube-scheduler-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:51.397488  455005 pod_ready.go:94] pod "kube-scheduler-pause-827956" is "Ready"
	I1026 09:16:51.397519  455005 pod_ready.go:86] duration metric: took 399.194921ms for pod "kube-scheduler-pause-827956" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:16:51.397532  455005 pod_ready.go:40] duration metric: took 7.707306268s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:16:51.454178  455005 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:16:51.457385  455005 out.go:179] * Done! kubectl is now configured to use "pause-827956" cluster and "default" namespace by default
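The pod_ready waiter above polls each control-plane pod by label (k8s-app=kube-dns, component=etcd, and so on) until its Ready condition turns True or the 4m0s budget expires; the final lines simply note a tolerated one-minor-version skew between kubectl 1.33.2 and the 1.34.1 cluster. A hedged sketch of that wait, with a plain polling loop standing in for minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// condition the "is not \"Ready\"" / "is \"Ready\"" lines above reflect.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		for ctx.Err() == nil {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}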
	I1026 09:16:52.778890  445201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:16:52.789937  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 09:16:52.790008  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 09:16:52.815609  445201 cri.go:89] found id: "68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	I1026 09:16:52.815631  445201 cri.go:89] found id: ""
	I1026 09:16:52.815639  445201 logs.go:282] 1 containers: [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668]
	I1026 09:16:52.815699  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.819424  445201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 09:16:52.819505  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 09:16:52.845893  445201 cri.go:89] found id: "de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:52.845918  445201 cri.go:89] found id: ""
	I1026 09:16:52.845927  445201 logs.go:282] 1 containers: [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc]
	I1026 09:16:52.845982  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.849890  445201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 09:16:52.849965  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 09:16:52.876233  445201 cri.go:89] found id: ""
	I1026 09:16:52.876258  445201 logs.go:282] 0 containers: []
	W1026 09:16:52.876267  445201 logs.go:284] No container was found matching "coredns"
	I1026 09:16:52.876274  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 09:16:52.876336  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 09:16:52.906825  445201 cri.go:89] found id: "d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:52.906846  445201 cri.go:89] found id: "a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:52.906851  445201 cri.go:89] found id: ""
	I1026 09:16:52.906858  445201 logs.go:282] 2 containers: [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629]
	I1026 09:16:52.906914  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.910552  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.913863  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 09:16:52.913934  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 09:16:52.941593  445201 cri.go:89] found id: ""
	I1026 09:16:52.941620  445201 logs.go:282] 0 containers: []
	W1026 09:16:52.941629  445201 logs.go:284] No container was found matching "kube-proxy"
	I1026 09:16:52.941635  445201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 09:16:52.941693  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 09:16:52.978764  445201 cri.go:89] found id: "8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:52.978837  445201 cri.go:89] found id: ""
	I1026 09:16:52.978859  445201 logs.go:282] 1 containers: [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef]
	I1026 09:16:52.978941  445201 ssh_runner.go:195] Run: which crictl
	I1026 09:16:52.984587  445201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 09:16:52.984647  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 09:16:53.017265  445201 cri.go:89] found id: ""
	I1026 09:16:53.017289  445201 logs.go:282] 0 containers: []
	W1026 09:16:53.017298  445201 logs.go:284] No container was found matching "kindnet"
	I1026 09:16:53.017305  445201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 09:16:53.017367  445201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 09:16:53.067882  445201 cri.go:89] found id: ""
	I1026 09:16:53.067948  445201 logs.go:282] 0 containers: []
	W1026 09:16:53.067969  445201 logs.go:284] No container was found matching "storage-provisioner"
	I1026 09:16:53.068006  445201 logs.go:123] Gathering logs for kube-scheduler [d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85] ...
	I1026 09:16:53.068037  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d400e8b9f0ae1f628bfa4354b17a9923f0024d5a01fcd9353e689da845671e85"
	I1026 09:16:53.142921  445201 logs.go:123] Gathering logs for container status ...
	I1026 09:16:53.143001  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 09:16:53.181811  445201 logs.go:123] Gathering logs for dmesg ...
	I1026 09:16:53.181841  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 09:16:53.202147  445201 logs.go:123] Gathering logs for describe nodes ...
	I1026 09:16:53.202222  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 09:16:53.297486  445201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 09:16:53.297551  445201 logs.go:123] Gathering logs for etcd [de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc] ...
	I1026 09:16:53.297577  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de32875ececb46a7b74f7a217390244e16b1e018ce0ae13acb9924ac551ca1fc"
	I1026 09:16:53.385965  445201 logs.go:123] Gathering logs for kube-scheduler [a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629] ...
	I1026 09:16:53.386003  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0bdcee41ce104153609746458a6bc5d93ab558f6a3f6fcce6e3bc9d5f27f629"
	I1026 09:16:53.425657  445201 logs.go:123] Gathering logs for kube-controller-manager [8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef] ...
	I1026 09:16:53.425681  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ab3483382024f1551173819601fdac6ce8ef30821adb181ce99f805bf070eef"
	I1026 09:16:53.489466  445201 logs.go:123] Gathering logs for CRI-O ...
	I1026 09:16:53.489496  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 09:16:53.586891  445201 logs.go:123] Gathering logs for kubelet ...
	I1026 09:16:53.586932  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 09:16:53.788719  445201 logs.go:123] Gathering logs for kube-apiserver [68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668] ...
	I1026 09:16:53.788755  445201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68f6053e321f94717a9c7c3a6e097ed60cd1257e2fa4234929fdf874c2854668"
	
	
	==> CRI-O <==
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.974151099Z" level=info msg="Created container d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd: kube-system/coredns-66bc5c9577-55zjj/coredns" id=608f73e5-09ec-4433-a963-36482eee98a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.974578296Z" level=info msg="Started container" PID=2158 containerID=e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a description=kube-system/etcd-pause-827956/etcd id=af90b639-5e3c-4f1e-a8f4-f8f7d139b7ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=c15c154dd600aa6bf61c99a50e62a2c496889bac5085f44cb7ad21a73241a191
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.975461482Z" level=info msg="Starting container: d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd" id=1b0cb93c-7460-411d-aa4c-44b51390a258 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:16:36 pause-827956 crio[2049]: time="2025-10-26T09:16:36.99680911Z" level=info msg="Started container" PID=2170 containerID=d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd description=kube-system/coredns-66bc5c9577-55zjj/coredns id=1b0cb93c-7460-411d-aa4c-44b51390a258 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dee9282bb453234d3c9f4e1ca5ac5c2c4ab90a09b504a15cedb7434704573712
	Oct 26 09:16:37 pause-827956 crio[2049]: time="2025-10-26T09:16:37.014944842Z" level=info msg="Created container abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0: kube-system/kube-apiserver-pause-827956/kube-apiserver" id=1ca8c862-8b35-4c20-b1a2-316189ab2fbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:16:37 pause-827956 crio[2049]: time="2025-10-26T09:16:37.015848451Z" level=info msg="Starting container: abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0" id=0c93bd4f-8fad-4720-9585-acc3b217a3a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:16:37 pause-827956 crio[2049]: time="2025-10-26T09:16:37.019324915Z" level=info msg="Started container" PID=2197 containerID=abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0 description=kube-system/kube-apiserver-pause-827956/kube-apiserver id=0c93bd4f-8fad-4720-9585-acc3b217a3a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=738bcc2f62101bb50a4162e9fe8def6cedba8bebb86189ec2ff7d8b770f83192
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.233721479Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.23740931Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.237446012Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.237471637Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.241029743Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.241066133Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.241090995Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.244192193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.24422686Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.244249991Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.247424158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.247469746Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.247494575Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.250853401Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.250888618Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.250911929Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.254011593Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:16:47 pause-827956 crio[2049]: time="2025-10-26T09:16:47.254046244Z" level=info msg="Updated default CNI network name to kindnet"
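The CREATE/WRITE/RENAME events above are CRI-O's CNI monitor reacting while kindnet rewrites its config atomically: write 10-kindnet.conflist.temp, then rename it over 10-kindnet.conflist, with every event triggering a reload of the default network. A stand-in for that watch loop using github.com/fsnotify/fsnotify (the real monitor lives in the ocicni library that CRI-O vendors):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				// In CRI-O this is where the conflist is re-parsed and the
				// default CNI network name updated, as logged above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}

Watching the directory rather than the file matters: the rename replaces the inode, so a watch on the file itself would silently go stale.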
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	abeec7c4eb27a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   738bcc2f62101       kube-apiserver-pause-827956            kube-system
	d76caf76791ff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   dee9282bb4532       coredns-66bc5c9577-55zjj               kube-system
	ab4c4b82eb630       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   9e725f779785e       kube-proxy-256pg                       kube-system
	e83d0bbe6c148       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   c15c154dd600a       etcd-pause-827956                      kube-system
	ce03a3f9e33c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   f22039d015ac6       kindnet-xws2g                          kube-system
	6662ea14a334a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   2071a9e92c010       kube-controller-manager-pause-827956   kube-system
	12bfc7b7ec9ef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   f870281ad79b0       kube-scheduler-pause-827956            kube-system
	5a42e583dd72c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   31 seconds ago       Exited              coredns                   0                   dee9282bb4532       coredns-66bc5c9577-55zjj               kube-system
	17d2f4193961a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f22039d015ac6       kindnet-xws2g                          kube-system
	e2303b10c782a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   9e725f779785e       kube-proxy-256pg                       kube-system
	89a45b6148f73       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   2071a9e92c010       kube-controller-manager-pause-827956   kube-system
	4fe133254cda6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   738bcc2f62101       kube-apiserver-pause-827956            kube-system
	37f3a1c3ea560       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   c15c154dd600a       etcd-pause-827956                      kube-system
	6ae98efebf5b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   f870281ad79b0       kube-scheduler-pause-827956            kube-system
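
Worth noting in the table: every control-plane container is at ATTEMPT 1 and Running, with its ATTEMPT 0 predecessor Exited in the same pod sandbox (identical POD ID), which is what a stop/restart cycle of the static pods looks like under CRI-O. The table format matches `crictl ps -a`, and the same data can be read straight off the CRI socket. A minimal Go sketch, assuming CRI-O's default socket path (/var/run/crio/crio.sock) and a cri-api version matching the runtime; this is illustrative only, not part of the test harness:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket directly (default path assumed).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// A restarted container shows up as attempt=1 RUNNING next to
		// its attempt=0 EXITED predecessor, as in the table above.
		for _, c := range resp.Containers {
			fmt.Printf("%-26s attempt=%d state=%s\n",
				c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}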
	
	
	==> coredns [5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49014 - 57664 "HINFO IN 7018778514784035254.560845710521869690. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022597217s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d76caf76791ff3d0de0721028ed80b7ae1ce7da62950cde0d84ec24c3952f2cd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48721 - 49217 "HINFO IN 6978744325466965760.3353292712821291265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030323555s
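
The restarted CoreDNS instance logs `waiting for Kubernetes API` while the apiserver is still coming back, then starts serving with an unsynced API after its timeout; the first instance's `lameduck mode for 5s` line matches the stock Corefile's `lameduck 5s` health setting. Once it is serving, resolution can be checked with a plain DNS query. A small sketch using the third-party github.com/miekg/dns package; 10.96.0.10 is an assumption (the conventional kube-dns ClusterIP inside the 10.96.0.0/12 service range seen later in the apiserver log):

	package main

	import (
		"fmt"

		"github.com/miekg/dns"
	)

	func main() {
		m := new(dns.Msg)
		m.SetQuestion(dns.Fqdn("kubernetes.default.svc.cluster.local"), dns.TypeA)

		// 10.96.0.10 is assumed: the conventional kube-dns ClusterIP
		// in the 10.96.0.0/12 service range; adjust for the cluster.
		in, _, err := new(dns.Client).Exchange(m, "10.96.0.10:53")
		if err != nil {
			panic(err)
		}
		for _, rr := range in.Answer {
			fmt.Println(rr) // expect an A record for the apiserver ClusterIP
		}
	}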
	
	
	==> describe nodes <==
	Name:               pause-827956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-827956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=pause-827956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_15_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-827956
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:16:25 +0000   Sun, 26 Oct 2025 09:16:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-827956
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ebc43747-cbfd-4a43-9763-85deb5eb87af
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-55zjj                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-827956                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-xws2g                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-pause-827956             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-827956    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-256pg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-827956             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 72s   kube-proxy       
	  Normal   Starting                 14s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-827956 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-827956 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-827956 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s   node-controller  Node pause-827956 event: Registered Node pause-827956 in Controller
	  Normal   NodeReady                32s   kubelet          Node pause-827956 status is now: NodeReady
	  Normal   RegisteredNode           12s   node-controller  Node pause-827956 event: Registered Node pause-827956 in Controller
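
The Conditions table records the node going Ready at 09:16:25, shortly before the restarted control plane re-registered it (the second RegisteredNode event). The same conditions come from the Node object's status and can be read programmatically. A minimal client-go sketch, assuming a kubeconfig at the default location; the node name is taken from the output above:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig from the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(
			context.Background(), "pause-827956", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same data as the Conditions table above: MemoryPressure,
		// DiskPressure, PIDPressure, and Ready.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}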
	
	
	==> dmesg <==
	[Oct26 08:45] overlayfs: idmapped layers are currently not supported
	[Oct26 08:50] overlayfs: idmapped layers are currently not supported
	[  +3.466267] overlayfs: idmapped layers are currently not supported
	[Oct26 08:51] overlayfs: idmapped layers are currently not supported
	[Oct26 08:52] overlayfs: idmapped layers are currently not supported
	[ +49.561224] hrtimer: interrupt took 37499666 ns
	[Oct26 08:53] overlayfs: idmapped layers are currently not supported
	[Oct26 08:58] overlayfs: idmapped layers are currently not supported
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [37f3a1c3ea560b1526a75adf6c48079777ea4d0d97c6eed519a328185df0f52d] <==
	{"level":"warn","ts":"2025-10-26T09:15:33.621243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.663197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.731999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.749874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.797850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:33.855510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:15:34.099464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T09:16:29.257144Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T09:16:29.257203Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-827956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-26T09:16:29.257291Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T09:16:29.527692Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T09:16:29.529196Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T09:16:29.529249Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-26T09:16:29.529317Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T09:16:29.529335Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529366Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529445Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T09:16:29.529489Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529578Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T09:16:29.529596Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T09:16:29.529603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T09:16:29.532631Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-26T09:16:29.532714Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T09:16:29.532757Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T09:16:29.532782Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-827956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e83d0bbe6c148b6e24b2556ca0113a9f8ed34d3fab305fb67e553f87c275cf1a] <==
	{"level":"warn","ts":"2025-10-26T09:16:40.887992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.907646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.928693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.946320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.958199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.977532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:40.999244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.011525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.027477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.046924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.062054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.077248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.093579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.106987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.127790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.136867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.155156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.168523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.184249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.202364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.221147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.259263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.278320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.293409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:16:41.347165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:16:57 up  2:59,  0 user,  load average: 3.54, 3.53, 2.67
	Linux pause-827956 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17d2f4193961ad7514266eb4055c30b689db80cf0adfced7efbe03bc510a062e] <==
	I1026 09:15:44.512222       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:15:44.598986       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:15:44.599190       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:15:44.599231       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:15:44.599268       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:15:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:15:44.799793       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:15:44.799869       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:15:44.799902       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:15:44.800870       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:16:14.800023       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:16:14.801224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:16:14.801233       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:16:14.801327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 09:16:16.100966       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:16:16.100997       1 metrics.go:72] Registering metrics
	I1026 09:16:16.101049       1 controller.go:711] "Syncing nftables rules"
	I1026 09:16:24.801664       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:16:24.801819       1 main.go:301] handling current node
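
The exited instance's reflector errors show client-go List calls to 10.96.0.1:443 timing out while the apiserver was down; once it returned, the informer caches synced and the node handler resumed. The same `Waiting for caches to sync` / `Caches are synced` pairs appear in kube-proxy and the controller-manager below, all produced by the standard client-go shared-informer pattern. A minimal sketch of that pattern, assuming a default kubeconfig:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Blocks until the initial List succeeds; while the apiserver
		// is unreachable, this is the "Waiting for caches to sync"
		// phase seen in the logs above.
		if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
			panic("cache never synced")
		}
		fmt.Println("caches are synced")
	}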
	
	
	==> kindnet [ce03a3f9e33c9a4c2959640c1149226120a0de9d26f6aa6bf6f5f5aa6f415656] <==
	I1026 09:16:37.032212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:16:37.032527       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:16:37.032649       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:16:37.032661       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:16:37.032675       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:16:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:16:37.239275       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:16:37.239363       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:16:37.239398       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:16:37.239874       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 09:16:42.241880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:16:42.241918       1 metrics.go:72] Registering metrics
	I1026 09:16:42.242006       1 controller.go:711] "Syncing nftables rules"
	I1026 09:16:47.233313       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:16:47.233397       1 main.go:301] handling current node
	I1026 09:16:57.238894       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:16:57.238934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217] <==
	W1026 09:16:29.276559       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276570       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276603       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276635       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276650       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276697       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276722       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276740       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276783       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276821       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276827       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276868       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276873       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276921       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276933       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276966       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276990       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276695       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276784       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277039       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277070       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.276440       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277112       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 09:16:29.277140       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [abeec7c4eb27ac64f0ba657b8bf44b32378b5e52e2cf076c2cc21ee07e4d37a0] <==
	I1026 09:16:42.187460       1 policy_source.go:240] refreshing policies
	I1026 09:16:42.189987       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:16:42.190029       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:16:42.190149       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:16:42.217159       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:16:42.219237       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:16:42.219379       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:16:42.232659       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 09:16:42.233950       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:16:42.239081       1 aggregator.go:171] initial CRD sync complete...
	I1026 09:16:42.239167       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 09:16:42.239202       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:16:42.239234       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:16:42.260792       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1026 09:16:42.267024       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:16:42.286909       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:16:42.293161       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 09:16:42.293684       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 09:16:42.296639       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:16:42.893820       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:16:44.084499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:16:45.627037       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:16:45.677324       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:16:45.826205       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:16:45.879087       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6662ea14a334a761fe4263eb7041fee3275f3fb0df623e5a284b78e1ae7013fd] <==
	I1026 09:16:45.510295       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:16:45.513558       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 09:16:45.516766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:16:45.520450       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:16:45.520539       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:16:45.520459       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:16:45.520620       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:16:45.521922       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:16:45.521982       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 09:16:45.521944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 09:16:45.522066       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 09:16:45.521970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 09:16:45.522118       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 09:16:45.525268       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:16:45.530623       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:16:45.531763       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:16:45.532987       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:16:45.536270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:16:45.541613       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:16:45.543861       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:16:45.554280       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 09:16:45.565683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:16:45.571313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:16:45.571404       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:16:45.571474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [89a45b6148f732a570ec8ec4b04d105f2cd8b113b8102472475a7700c85b1dde] <==
	I1026 09:15:42.646513       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:15:42.652953       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:15:42.663195       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:15:42.665017       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-827956" podCIDRs=["10.244.0.0/24"]
	I1026 09:15:42.665307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:15:42.673024       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:15:42.682801       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:15:42.683069       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 09:15:42.683134       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:15:42.683140       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 09:15:42.683369       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:15:42.683510       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:15:42.683549       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:15:42.683588       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:15:42.683727       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:15:42.683782       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 09:15:42.683826       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:15:42.683739       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:15:42.688893       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:15:42.688990       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:15:42.689072       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-827956"
	I1026 09:15:42.689158       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 09:15:42.702244       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:15:42.718838       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:16:27.695258       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab4c4b82eb6305b31de767eaccb7619d12fbd179a00277177a5ca1e18a63b6b7] <==
	I1026 09:16:38.702336       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:16:40.782374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:16:42.282595       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:16:42.282794       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:16:42.282951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:16:42.348402       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:16:42.348556       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:16:42.353259       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:16:42.353645       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:16:42.353833       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:16:42.355316       1 config.go:200] "Starting service config controller"
	I1026 09:16:42.355384       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:16:42.355428       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:16:42.355456       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:16:42.355494       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:16:42.355520       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:16:42.356291       1 config.go:309] "Starting node config controller"
	I1026 09:16:42.356353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:16:42.356382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:16:42.458213       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:16:42.458387       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:16:42.458449       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e2303b10c782aa72518ad94d4990fb5b24a90a8a4028212f378f00fd174415d9] <==
	I1026 09:15:44.503151       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:15:44.656277       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:15:44.756704       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:15:44.756743       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:15:44.756814       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:15:44.778572       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:15:44.778634       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:15:44.782526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:15:44.782898       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:15:44.783136       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:15:44.784616       1 config.go:200] "Starting service config controller"
	I1026 09:15:44.784635       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:15:44.784655       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:15:44.784660       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:15:44.784671       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:15:44.784675       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:15:44.785591       1 config.go:309] "Starting node config controller"
	I1026 09:15:44.785615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:15:44.785622       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:15:44.885402       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:15:44.885420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:15:44.885462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [12bfc7b7ec9ef3fdca39d3e518d0f61f14bf8266039447a12a31812d2d9479e2] <==
	I1026 09:16:40.061671       1 serving.go:386] Generated self-signed cert in-memory
	W1026 09:16:42.026302       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 09:16:42.026349       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 09:16:42.026360       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 09:16:42.026394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 09:16:42.181204       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:16:42.181245       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:16:42.183604       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:42.183656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:42.186861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:16:42.186995       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:16:42.286897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6ae98efebf5b3633ffbdb48481250964d2624f7e5aad505b82a63cb22724c71a] <==
	E1026 09:15:35.781105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:15:35.781597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:15:35.782963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:15:35.783040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:15:36.653482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:15:36.679105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:15:36.689993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:15:36.691208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:15:36.702033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:15:36.724429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:15:36.725846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:15:36.769694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:15:36.819556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:15:36.877464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:15:36.881016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:15:36.916805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:15:36.998912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:15:37.019022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 09:15:40.228984       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:29.257812       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 09:16:29.257963       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 09:16:29.257977       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 09:16:29.257998       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:16:29.258161       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 09:16:29.258200       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.674219    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xws2g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07c1368d-3ddb-492b-90ea-8c001e45fbe5" pod="kube-system/kindnet-xws2g"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: I1026 09:16:36.709678    1297 scope.go:117] "RemoveContainer" containerID="5a42e583dd72ccb8356fea2247987200d4f78d355e62a129a97be0e8bb743c8e"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710327    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b2d3d0c239f2715756705b971b37c069" pod="kube-system/kube-scheduler-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710523    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="63e460761495c032591e443b4caff49c" pod="kube-system/kube-apiserver-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710703    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0a305a42dfd59c6c950ca91762686cb4" pod="kube-system/etcd-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.710891    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="24d6f1fed89c10ab427f81b5a7f4af90" pod="kube-system/kube-controller-manager-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.711056    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-256pg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f084e0d-f221-4e24-ab7a-5ae1cb414b56" pod="kube-system/kube-proxy-256pg"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.713962    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xws2g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07c1368d-3ddb-492b-90ea-8c001e45fbe5" pod="kube-system/kindnet-xws2g"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.714166    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-55zjj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7fc468bf-1986-4172-8eba-98945beb861a" pod="kube-system/coredns-66bc5c9577-55zjj"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: I1026 09:16:36.744195    1297 scope.go:117] "RemoveContainer" containerID="4fe133254cda6122846d35195d7d064f52d4e719456917c8a08884111835e217"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.744474    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0a305a42dfd59c6c950ca91762686cb4" pod="kube-system/etcd-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.744700    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="24d6f1fed89c10ab427f81b5a7f4af90" pod="kube-system/kube-controller-manager-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.744874    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-256pg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f084e0d-f221-4e24-ab7a-5ae1cb414b56" pod="kube-system/kube-proxy-256pg"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745025    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xws2g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07c1368d-3ddb-492b-90ea-8c001e45fbe5" pod="kube-system/kindnet-xws2g"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745167    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-55zjj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7fc468bf-1986-4172-8eba-98945beb861a" pod="kube-system/coredns-66bc5c9577-55zjj"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745320    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b2d3d0c239f2715756705b971b37c069" pod="kube-system/kube-scheduler-pause-827956"
	Oct 26 09:16:36 pause-827956 kubelet[1297]: E1026 09:16:36.745466    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-827956\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="63e460761495c032591e443b4caff49c" pod="kube-system/kube-apiserver-pause-827956"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.052319    1297 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-827956\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.052991    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-827956\" is forbidden: User \"system:node:pause-827956\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" podUID="24d6f1fed89c10ab427f81b5a7f4af90" pod="kube-system/kube-controller-manager-pause-827956"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.053333    1297 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-827956\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 26 09:16:42 pause-827956 kubelet[1297]: E1026 09:16:42.120882    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-256pg\" is forbidden: User \"system:node:pause-827956\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-827956' and this object" podUID="0f084e0d-f221-4e24-ab7a-5ae1cb414b56" pod="kube-system/kube-proxy-256pg"
	Oct 26 09:16:48 pause-827956 kubelet[1297]: W1026 09:16:48.706760    1297 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 26 09:16:51 pause-827956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:16:51 pause-827956 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:16:51 pause-827956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
-- /stdout --
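Two distinct signatures end the dump above: the kube-scheduler's informers are denied list/watch on every resource while the restarted control plane's RBAC is still converging, and the kubelet cannot reach the API server on 192.168.85.2:8443 until systemd stops kubelet.service. When reproducing this, a minimal probe to separate the RBAC symptom from the connectivity one (a sketch; the context name is this run's profile, and `kubectl auth can-i --as` and `get --raw` are standard kubectl):

	# "no"/forbidden here means scheduler RBAC has not converged yet:
	kubectl --context pause-827956 auth can-i list nodes --as=system:kube-scheduler
	# "connection refused" here matches the kubelet errors above:
	kubectl --context pause-827956 get --raw /readyz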
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-827956 -n pause-827956
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-827956 -n pause-827956: exit status 2 (375.78426ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-827956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.86s)
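For reference when reading the status probes above: `minikube status --format` takes a Go template over the status struct, and a non-zero exit code reflects component state rather than a command failure, which is why the harness notes that exit status 2 "may be ok". A combined query over the fields the harness checks separately (a sketch; `.Host`, `.Kubelet`, and `.APIServer` are the field names used elsewhere in this report):

	# Host/Kubelet/APIServer in one call; exit code still encodes degraded components:
	out/minikube-linux-arm64 status -p pause-827956 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'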
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.53s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (279.643862ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:23:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
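The MK_ADDON_ENABLE_PAUSED chain above shows the first thing `addons enable` does: it verifies the cluster is not paused by listing runc containers, and that probe itself failed because /run/runc is missing inside the node. The probe can be re-run by hand as below (commands taken from the error text; whether the missing directory means CRI-O is using a different runtime root or simply that no runc-managed containers exist yet is an assumption to verify):

	# The exact probe minikube ran, via minikube ssh:
	out/minikube-linux-arm64 ssh -p old-k8s-version-167519 -- sudo runc list -f json
	# Confirm the state directory named in the stderr above:
	out/minikube-linux-arm64 ssh -p old-k8s-version-167519 -- sudo ls /run/runc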
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-167519 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-167519 describe deploy/metrics-server -n kube-system: exit status 1 (90.861562ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-167519 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
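The assertion above checks that the `--images`/`--registries` overrides took effect, i.e. that the metrics-server deployment references fake.domain/registry.k8s.io/echoserver:1.4; since the enable call itself failed, the deployment was never created, hence the NotFound. The same check expressed directly with standard kubectl (a sketch using jsonpath instead of describe):

	# Prints the image the addon deployment actually uses; NotFound in this run:
	kubectl --context old-k8s-version-167519 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'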
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-167519
helpers_test.go:243: (dbg) docker inspect old-k8s-version-167519:
-- stdout --
	[
	    {
	        "Id": "f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2",
	        "Created": "2025-10-26T09:22:22.22701342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:22:22.296401797Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/hosts",
	        "LogPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2-json.log",
	        "Name": "/old-k8s-version-167519",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-167519:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-167519",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2",
	                "LowerDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-167519",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-167519/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-167519",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-167519",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-167519",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d1dfd5db2dd4b414aadac479998731a896fde5af9d9f6fa7003f26be700ef8b4",
	            "SandboxKey": "/var/run/docker/netns/d1dfd5db2dd4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-167519": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:72:0a:35:e2:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ece1bd65f7fecf7ce45d18dcdba0500d91ebe98a9871736d6b28c081ea483677",
	                    "EndpointID": "6359f259a8a14d2473fab8d26ea1717e82afd312169939f8c219a4f6b511e159",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-167519",
	                        "f43cbb714de4"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
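The inspect output above shows the fixed guest ports (22, 2376, 5000, 8443, 32443) each published on an ephemeral 127.0.0.1 host port. This is how minikube reaches the node: it extracts the mapped SSH port with a Go template, and the same command appears verbatim in the Last Start log below:

	# Prints 33420 for this container, per the NetworkSettings above:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-167519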
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-167519 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-167519 logs -n 25: (1.202453987s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-796399 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo containerd config dump                                                                                                                                                                                                  │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo crio config                                                                                                                                                                                                             │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ delete  │ -p cilium-796399                                                                                                                                                                                                                              │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:17 UTC │
	│ start   │ -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:18 UTC │
	│ delete  │ -p force-systemd-env-003748                                                                                                                                                                                                                   │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:18 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	│ delete  │ -p kubernetes-upgrade-275732                                                                                                                                                                                                                  │ kubernetes-upgrade-275732    │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:21 UTC │
	│ start   │ -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ cert-options-094384 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:22:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:22:29.550104  483549 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:22:29.550317  483549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:22:29.550324  483549 out.go:374] Setting ErrFile to fd 2...
	I1026 09:22:29.550328  483549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:22:29.550588  483549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:22:29.551036  483549 out.go:368] Setting JSON to false
	I1026 09:22:29.552178  483549 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11100,"bootTime":1761459450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:22:29.552254  483549 start.go:141] virtualization:  
	I1026 09:22:29.558164  483549 out.go:179] * [default-k8s-diff-port-289159] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:22:29.561878  483549 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:22:29.561812  483549 notify.go:220] Checking for updates...
	I1026 09:22:29.568425  483549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:22:29.572944  483549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:22:29.576170  483549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:22:29.579426  483549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:22:29.582578  483549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:22:29.586244  483549 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:22:29.586358  483549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:22:29.628631  483549 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:22:29.628757  483549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:22:29.721427  483549 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:22:29.711312817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:22:29.721535  483549 docker.go:318] overlay module found
	I1026 09:22:29.725146  483549 out.go:179] * Using the docker driver based on user configuration
	I1026 09:22:29.728190  483549 start.go:305] selected driver: docker
	I1026 09:22:29.728217  483549 start.go:925] validating driver "docker" against <nil>
	I1026 09:22:29.728232  483549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:22:29.729019  483549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:22:29.822829  483549 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:22:29.810598338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:22:29.823006  483549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 09:22:29.823230  483549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:22:29.826248  483549 out.go:179] * Using Docker driver with root privileges
	I1026 09:22:29.829199  483549 cni.go:84] Creating CNI manager for ""
	I1026 09:22:29.829285  483549 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:22:29.829303  483549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:22:29.829383  483549 start.go:349] cluster config:
	{Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:22:29.832609  483549 out.go:179] * Starting "default-k8s-diff-port-289159" primary control-plane node in "default-k8s-diff-port-289159" cluster
	I1026 09:22:29.835449  483549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:22:29.838497  483549 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:22:29.841291  483549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:22:29.841347  483549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:22:29.841356  483549 cache.go:58] Caching tarball of preloaded images
	I1026 09:22:29.841428  483549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:22:29.841708  483549 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:22:29.841740  483549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:22:29.841854  483549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json ...
	I1026 09:22:29.841873  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json: {Name:mk5a91e2126d1df931dc91592239e539dd956dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:29.867579  483549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:22:29.867598  483549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:22:29.867610  483549 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:22:29.867632  483549 start.go:360] acquireMachinesLock for default-k8s-diff-port-289159: {Name:mk7eb4122b0c4e83c8a2504ee91491be3273f817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:22:29.867731  483549 start.go:364] duration metric: took 83.595µs to acquireMachinesLock for "default-k8s-diff-port-289159"
	I1026 09:22:29.867766  483549 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:22:29.867833  483549 start.go:125] createHost starting for "" (driver="docker")
	I1026 09:22:27.258952  481228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-167519
	
	I1026 09:22:27.258983  481228 ubuntu.go:182] provisioning hostname "old-k8s-version-167519"
	I1026 09:22:27.259046  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:27.282544  481228 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:27.282944  481228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33420 <nil> <nil>}
	I1026 09:22:27.282960  481228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-167519 && echo "old-k8s-version-167519" | sudo tee /etc/hostname
	I1026 09:22:27.473736  481228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-167519
	
	I1026 09:22:27.473835  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:27.508945  481228 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:27.509262  481228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33420 <nil> <nil>}
	I1026 09:22:27.509279  481228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-167519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-167519/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-167519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:22:27.679105  481228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:22:27.679133  481228 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:22:27.679165  481228 ubuntu.go:190] setting up certificates
	I1026 09:22:27.679174  481228 provision.go:84] configureAuth start
	I1026 09:22:27.679236  481228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-167519
	I1026 09:22:27.698963  481228 provision.go:143] copyHostCerts
	I1026 09:22:27.699034  481228 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:22:27.699048  481228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:22:27.699129  481228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:22:27.699244  481228 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:22:27.699255  481228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:22:27.699284  481228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:22:27.699341  481228 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:22:27.699349  481228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:22:27.699372  481228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
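	
	Note: the copyHostCerts step above follows a remove-then-copy pattern so a stale ca.pem/cert.pem/key.pem never survives a refresh. A minimal Go sketch of that same pattern, using hypothetical local file names (illustrative only, not minikube's exec_runner itself):
	
	    package main
	
	    import (
	    	"io"
	    	"os"
	    )
	
	    // replaceFile mirrors the "found ..., removing ..." then "cp: ..." sequence
	    // in the log: delete any existing destination, then copy src over it.
	    func replaceFile(src, dst string) error {
	    	if _, err := os.Stat(dst); err == nil { // found dst, removing ...
	    		if err := os.Remove(dst); err != nil {
	    			return err
	    		}
	    	}
	    	in, err := os.Open(src)
	    	if err != nil {
	    		return err
	    	}
	    	defer in.Close()
	    	out, err := os.Create(dst)
	    	if err != nil {
	    		return err
	    	}
	    	defer out.Close()
	    	_, err = io.Copy(out, in)
	    	return err
	    }
	
	    func main() {
	    	// Stand-ins for certs/ca.pem --> ca.pem from the log above.
	    	if err := replaceFile("certs/ca.pem", "ca.pem"); err != nil {
	    		panic(err)
	    	}
	    }
	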
	I1026 09:22:27.699418  481228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-167519 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-167519]
	I1026 09:22:28.089939  481228 provision.go:177] copyRemoteCerts
	I1026 09:22:28.090015  481228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:22:28.090058  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:28.111392  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:22:28.215132  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 09:22:28.234450  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:22:28.253559  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:22:28.272318  481228 provision.go:87] duration metric: took 593.129748ms to configureAuth
	I1026 09:22:28.272350  481228 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:22:28.272533  481228 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:22:28.272638  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:28.290254  481228 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:28.290557  481228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33420 <nil> <nil>}
	I1026 09:22:28.290573  481228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:22:28.610156  481228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:22:28.610182  481228 machine.go:96] duration metric: took 4.538871544s to provisionDockerMachine
	I1026 09:22:28.610192  481228 client.go:171] duration metric: took 13.33998442s to LocalClient.Create
	I1026 09:22:28.610204  481228 start.go:167] duration metric: took 13.340069385s to libmachine.API.Create "old-k8s-version-167519"
	I1026 09:22:28.610211  481228 start.go:293] postStartSetup for "old-k8s-version-167519" (driver="docker")
	I1026 09:22:28.610221  481228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:22:28.610303  481228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:22:28.610345  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:28.632461  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:22:28.785360  481228 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:22:28.789780  481228 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:22:28.789809  481228 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:22:28.789821  481228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:22:28.789880  481228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:22:28.789965  481228 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:22:28.790071  481228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:22:28.798690  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:22:28.855579  481228 start.go:296] duration metric: took 245.351747ms for postStartSetup
	I1026 09:22:28.855978  481228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-167519
	I1026 09:22:28.895383  481228 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/config.json ...
	I1026 09:22:28.895675  481228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:22:28.895721  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:28.926651  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:22:29.043736  481228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:22:29.048852  481228 start.go:128] duration metric: took 13.782413101s to createHost
	I1026 09:22:29.048873  481228 start.go:83] releasing machines lock for "old-k8s-version-167519", held for 13.782565743s
	I1026 09:22:29.048970  481228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-167519
	I1026 09:22:29.068390  481228 ssh_runner.go:195] Run: cat /version.json
	I1026 09:22:29.068463  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:29.068687  481228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:22:29.068745  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:22:29.100390  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:22:29.110076  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:22:29.219085  481228 ssh_runner.go:195] Run: systemctl --version
	I1026 09:22:29.322863  481228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:22:29.407821  481228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:22:29.412815  481228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:22:29.412894  481228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:22:29.461573  481228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:22:29.461625  481228 start.go:495] detecting cgroup driver to use...
	I1026 09:22:29.461657  481228 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:22:29.461714  481228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:22:29.485215  481228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:22:29.500703  481228 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:22:29.500764  481228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:22:29.519734  481228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:22:29.539345  481228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:22:29.702496  481228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:22:29.888996  481228 docker.go:234] disabling docker service ...
	I1026 09:22:29.889091  481228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:22:29.924516  481228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:22:29.947680  481228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:22:30.169993  481228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:22:30.324649  481228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:22:30.342337  481228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:22:30.358891  481228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 09:22:30.358952  481228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.375159  481228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:22:30.375227  481228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.390374  481228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.400257  481228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.409358  481228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:22:30.432240  481228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.441312  481228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.458287  481228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:30.471923  481228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:22:30.481580  481228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:22:30.490036  481228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:22:30.654691  481228 ssh_runner.go:195] Run: sudo systemctl restart crio
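	
	Note: the block above rewrites CRI-O's drop-in config with sed (pause image, cgroup manager, sysctls) and then restarts the service. A rough Go equivalent of the line-oriented rewrite, assuming a local copy of the drop-in file; this is a sketch of the pattern, not minikube's implementation, which runs sed over SSH on the node:
	
	    package main
	
	    import (
	    	"fmt"
	    	"os"
	    	"regexp"
	    )
	
	    // rewriteOption replaces any existing `key = ...` line in a TOML drop-in
	    // with the desired value, mirroring the sed edits in the log above.
	    func rewriteOption(conf []byte, key, value string) []byte {
	    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	    	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
	    }
	
	    func main() {
	    	// Hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf.
	    	const path = "02-crio.conf"
	    	conf, err := os.ReadFile(path)
	    	if err != nil {
	    		panic(err)
	    	}
	    	conf = rewriteOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	    	conf = rewriteOption(conf, "cgroup_manager", "cgroupfs")
	    	if err := os.WriteFile(path, conf, 0o644); err != nil {
	    		panic(err)
	    	}
	    }
	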
	I1026 09:22:30.849165  481228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:22:30.849231  481228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:22:30.855144  481228 start.go:563] Will wait 60s for crictl version
	I1026 09:22:30.855205  481228 ssh_runner.go:195] Run: which crictl
	I1026 09:22:30.859353  481228 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:22:30.903240  481228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
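	
	Note: after the restart, start.go waits up to 60s for the CRI socket and then for a crictl version response, as the "Will wait 60s for ..." lines show. A small Go sketch of that poll-with-deadline pattern (the helper name is invented; the path and timeout mirror the log):
	
	    package main
	
	    import (
	    	"fmt"
	    	"os"
	    	"time"
	    )
	
	    // waitForSocket polls until path exists or the deadline passes,
	    // mirroring the "Will wait 60s for socket path" step in the log.
	    func waitForSocket(path string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if _, err := os.Stat(path); err == nil {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("timed out waiting for %s", path)
	    }
	
	    func main() {
	    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	fmt.Println("socket is up")
	    }
	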
	I1026 09:22:30.903320  481228 ssh_runner.go:195] Run: crio --version
	I1026 09:22:30.935600  481228 ssh_runner.go:195] Run: crio --version
	I1026 09:22:30.981437  481228 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 09:22:30.984450  481228 cli_runner.go:164] Run: docker network inspect old-k8s-version-167519 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:22:31.009193  481228 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:22:31.013731  481228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
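	
	Note: the bash one-liner above makes the host.minikube.internal mapping idempotent: strip any stale line ending in the name, then append the fresh entry. The same filter-and-append logic as a Go sketch, operating on a hypothetical local test file rather than the node's /etc/hosts:
	
	    package main
	
	    import (
	    	"os"
	    	"strings"
	    )
	
	    // ensureHostAlias drops any line ending in "\t<name>" and appends a
	    // fresh "<ip>\t<name>" mapping, like the grep -v / echo pipeline above.
	    func ensureHostAlias(hosts, ip, name string) string {
	    	var kept []string
	    	for _, line := range strings.Split(hosts, "\n") {
	    		if !strings.HasSuffix(line, "\t"+name) {
	    			kept = append(kept, line)
	    		}
	    	}
	    	return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
	    }
	
	    func main() {
	    	// "hosts.test" is a stand-in; the log edits /etc/hosts on the node.
	    	data, err := os.ReadFile("hosts.test")
	    	if err != nil {
	    		panic(err)
	    	}
	    	out := ensureHostAlias(strings.TrimRight(string(data), "\n"),
	    		"192.168.76.1", "host.minikube.internal")
	    	if err := os.WriteFile("hosts.test", []byte(out), 0o644); err != nil {
	    		panic(err)
	    	}
	    }
	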
	I1026 09:22:31.023891  481228 kubeadm.go:883] updating cluster {Name:old-k8s-version-167519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:22:31.024000  481228 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 09:22:31.024073  481228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:22:31.064894  481228 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:22:31.064914  481228 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:22:31.064974  481228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:22:31.093579  481228 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:22:31.093653  481228 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:22:31.093676  481228 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1026 09:22:31.093809  481228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-167519 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:22:31.093932  481228 ssh_runner.go:195] Run: crio config
	I1026 09:22:31.178057  481228 cni.go:84] Creating CNI manager for ""
	I1026 09:22:31.178128  481228 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:22:31.178161  481228 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:22:31.178218  481228 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-167519 NodeName:old-k8s-version-167519 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:22:31.178413  481228 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-167519"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
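	Note: the generated kubeadm.yaml above stacks four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick stdlib-only Go sketch that splits such a stream and lists the document kinds (the file name is a hypothetical local copy, not a minikube helper):
	
	    package main
	
	    import (
	    	"fmt"
	    	"os"
	    	"strings"
	    )
	
	    // listKinds splits a multi-document YAML stream on "---" separators and
	    // reports each document's kind, e.g. the four kubeadm docs shown above.
	    func listKinds(stream string) []string {
	    	var kinds []string
	    	for _, doc := range strings.Split(stream, "\n---\n") {
	    		for _, line := range strings.Split(doc, "\n") {
	    			trimmed := strings.TrimSpace(line)
	    			if strings.HasPrefix(trimmed, "kind:") {
	    				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
	    				break
	    			}
	    		}
	    	}
	    	return kinds
	    }
	
	    func main() {
	    	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
	    	if err != nil {
	    		panic(err)
	    	}
	    	// e.g. [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	    	fmt.Println(listKinds(string(data)))
	    }
	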
	I1026 09:22:31.178527  481228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 09:22:31.186756  481228 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:22:31.186874  481228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:22:31.194870  481228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 09:22:31.208386  481228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:22:31.221962  481228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1026 09:22:31.235650  481228 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:22:31.239878  481228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:22:31.249807  481228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:22:31.391233  481228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:22:31.407627  481228 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519 for IP: 192.168.76.2
	I1026 09:22:31.407702  481228 certs.go:195] generating shared ca certs ...
	I1026 09:22:31.407742  481228 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:31.407920  481228 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:22:31.408000  481228 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:22:31.408045  481228 certs.go:257] generating profile certs ...
	I1026 09:22:31.408141  481228 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.key
	I1026 09:22:31.408183  481228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt with IP's: []
	I1026 09:22:32.112557  481228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt ...
	I1026 09:22:32.112636  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: {Name:mk725923ba243ddb4c8188a27dc6e0dc4a145b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:32.112846  481228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.key ...
	I1026 09:22:32.112887  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.key: {Name:mkdb3b4b7b907301d9fcde7e2e48a9bdc2247552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:32.113027  481228 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key.73d1f48f
	I1026 09:22:32.113068  481228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt.73d1f48f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 09:22:32.415883  481228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt.73d1f48f ...
	I1026 09:22:32.415958  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt.73d1f48f: {Name:mk68216ab345bb2d9b2cc3cc2779ae5db9185605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:32.416161  481228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key.73d1f48f ...
	I1026 09:22:32.416211  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key.73d1f48f: {Name:mk0b963279e01560c1774c19200daebc91ef86af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:32.416354  481228 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt.73d1f48f -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt
	I1026 09:22:32.416489  481228 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key.73d1f48f -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key
	I1026 09:22:32.416579  481228 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.key
	I1026 09:22:32.416631  481228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.crt with IP's: []
	I1026 09:22:33.171510  481228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.crt ...
	I1026 09:22:33.171588  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.crt: {Name:mkb8d27a0bb0368f997f1ffb9ae983bcc9845cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:33.171835  481228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.key ...
	I1026 09:22:33.171885  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.key: {Name:mk4fdfc7912f740ae7c9169d39a0c2dbc5c209a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
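	
	Note: certs.go above signs per-profile client, apiserver, and proxy-client certificates against the shared minikubeCA, with the SAN list shown in the log. A self-contained Go sketch of issuing a CA-signed server cert with IP SANs via crypto/x509; it generates a throwaway CA for illustration (minikube instead loads its persisted ca.crt/ca.key), and error handling is elided for brevity:
	
	    package main
	
	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )
	
	    func main() {
	    	// Throwaway CA; minikube loads ca.crt/ca.key from .minikube instead.
	    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	caTmpl := &x509.Certificate{
	    		SerialNumber:          big.NewInt(1),
	    		Subject:               pkix.Name{CommonName: "minikubeCA"},
	    		NotBefore:             time.Now(),
	    		NotAfter:              time.Now().Add(24 * time.Hour),
	    		IsCA:                  true,
	    		KeyUsage:              x509.KeyUsageCertSign,
	    		BasicConstraintsValid: true,
	    	}
	    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    	caCert, _ := x509.ParseCertificate(caDER)
	
	    	// Server cert with the IP SANs seen in the log above.
	    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	srvTmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{CommonName: "minikube"},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		IPAddresses: []net.IP{
	    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
	    		},
	    	}
	    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
	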
	I1026 09:22:33.172194  481228 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:22:33.172317  481228 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:22:33.172333  481228 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:22:33.172371  481228 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:22:33.172395  481228 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:22:33.172422  481228 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:22:33.172471  481228 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:22:33.173055  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:22:33.201224  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:22:33.230022  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:22:33.254108  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:22:33.275267  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 09:22:33.297626  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:22:33.351937  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:22:33.378281  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:22:33.404608  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:22:33.427040  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:22:33.447071  481228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:22:33.466681  481228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:22:33.481194  481228 ssh_runner.go:195] Run: openssl version
	I1026 09:22:33.487816  481228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:22:33.498042  481228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:22:33.502344  481228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:22:33.502420  481228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:22:33.550389  481228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:22:33.559691  481228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:22:33.570066  481228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:22:33.580739  481228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:22:33.580823  481228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:22:33.655380  481228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:22:33.669247  481228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:22:33.686138  481228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:22:33.690799  481228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:22:33.690873  481228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:22:33.740248  481228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
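	
	Note: each CA above is also linked as /etc/ssl/certs/<subject-hash>.0 so OpenSSL-style lookups can find it; the hash comes from `openssl x509 -hash -noout`. A Go sketch of that hash-and-symlink step, shelling out to openssl (assumed on PATH; paths are illustrative):
	
	    package main
	
	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )
	
	    // linkBySubjectHash mimics `openssl x509 -hash -noout -in cert` followed
	    // by `ln -fs cert /etc/ssl/certs/<hash>.0`, as in the log above.
	    func linkBySubjectHash(certPath, certsDir string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out))
	    	link := filepath.Join(certsDir, hash+".0")
	    	os.Remove(link) // replace any stale link, like ln -fs
	    	return os.Symlink(certPath, link)
	    }
	
	    func main() {
	    	// Illustrative paths matching the log above.
	    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    }
	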
	I1026 09:22:33.751566  481228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:22:33.760422  481228 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:22:33.760533  481228 kubeadm.go:400] StartCluster: {Name:old-k8s-version-167519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:22:33.760656  481228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:22:33.760737  481228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:22:33.795505  481228 cri.go:89] found id: ""
	I1026 09:22:33.795577  481228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:22:33.826577  481228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:22:33.835521  481228 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:22:33.835597  481228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:22:33.845655  481228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:22:33.845675  481228 kubeadm.go:157] found existing configuration files:
	
	I1026 09:22:33.845732  481228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:22:33.859273  481228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:22:33.859430  481228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:22:33.868815  481228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:22:33.877293  481228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:22:33.877379  481228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:22:33.885087  481228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:22:33.893474  481228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:22:33.893539  481228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:22:33.901705  481228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:22:33.910074  481228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:22:33.910150  481228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 09:22:33.917792  481228 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:22:33.972606  481228 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1026 09:22:33.972757  481228 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:22:34.019876  481228 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:22:34.020049  481228 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:22:34.020128  481228 kubeadm.go:318] OS: Linux
	I1026 09:22:34.020228  481228 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:22:34.020326  481228 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:22:34.020412  481228 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:22:34.020532  481228 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:22:34.020617  481228 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:22:34.020709  481228 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:22:34.020813  481228 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:22:34.020919  481228 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:22:34.021012  481228 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:22:34.118369  481228 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:22:34.118556  481228 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:22:34.118695  481228 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 09:22:34.279269  481228 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:22:29.871329  483549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:22:29.871603  483549 start.go:159] libmachine.API.Create for "default-k8s-diff-port-289159" (driver="docker")
	I1026 09:22:29.871651  483549 client.go:168] LocalClient.Create starting
	I1026 09:22:29.871730  483549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:22:29.871869  483549 main.go:141] libmachine: Decoding PEM data...
	I1026 09:22:29.871933  483549 main.go:141] libmachine: Parsing certificate...
	I1026 09:22:29.872062  483549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:22:29.872149  483549 main.go:141] libmachine: Decoding PEM data...
	I1026 09:22:29.872214  483549 main.go:141] libmachine: Parsing certificate...
	I1026 09:22:29.873107  483549 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-289159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:22:29.897359  483549 cli_runner.go:211] docker network inspect default-k8s-diff-port-289159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:22:29.897459  483549 network_create.go:284] running [docker network inspect default-k8s-diff-port-289159] to gather additional debugging logs...
	I1026 09:22:29.897489  483549 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-289159
	W1026 09:22:29.925901  483549 cli_runner.go:211] docker network inspect default-k8s-diff-port-289159 returned with exit code 1
	I1026 09:22:29.925944  483549 network_create.go:287] error running [docker network inspect default-k8s-diff-port-289159]: docker network inspect default-k8s-diff-port-289159: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-289159 not found
	I1026 09:22:29.925969  483549 network_create.go:289] output of [docker network inspect default-k8s-diff-port-289159]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-289159 not found
	
	** /stderr **
	I1026 09:22:29.926076  483549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:22:29.945049  483549 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:22:29.945425  483549 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:22:29.945655  483549 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:22:29.945955  483549 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ece1bd65f7fe IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:dc:53:4c:2c:18} reservation:<nil>}
	I1026 09:22:29.946568  483549 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a77140}
	I1026 09:22:29.946599  483549 network_create.go:124] attempt to create docker network default-k8s-diff-port-289159 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 09:22:29.946655  483549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-289159 default-k8s-diff-port-289159
	I1026 09:22:30.117741  483549 network_create.go:108] docker network default-k8s-diff-port-289159 192.168.85.0/24 created
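	
	Note: network.go above walks candidate private /24s in steps of 9 (192.168.49.0, 58, 67, 76, ...) and takes the first subnet no existing bridge occupies, here 192.168.85.0/24. A toy Go sketch of that scan; the step size matches the log, and the function is invented for illustration:
	
	    package main
	
	    import "fmt"
	
	    // firstFreeSubnet walks 192.168.x.0/24 candidates in steps of 9
	    // (49, 58, 67, 76, 85, ...) and returns the first one not in taken,
	    // mirroring the "skipping subnet ... that is taken" lines above.
	    func firstFreeSubnet(taken map[string]bool) string {
	    	for third := 49; third <= 247; third += 9 {
	    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
	    		if !taken[cidr] {
	    			return cidr
	    		}
	    	}
	    	return ""
	    }
	
	    func main() {
	    	taken := map[string]bool{
	    		"192.168.49.0/24": true, // bridges already present in the log
	    		"192.168.58.0/24": true,
	    		"192.168.67.0/24": true,
	    		"192.168.76.0/24": true,
	    	}
	    	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
	    }
	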
	I1026 09:22:30.117776  483549 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-289159" container
	I1026 09:22:30.117861  483549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:22:30.141646  483549 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-289159 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-289159 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:22:30.163827  483549 oci.go:103] Successfully created a docker volume default-k8s-diff-port-289159
	I1026 09:22:30.163929  483549 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-289159-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-289159 --entrypoint /usr/bin/test -v default-k8s-diff-port-289159:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:22:30.740391  483549 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-289159
	I1026 09:22:30.740444  483549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:22:30.740464  483549 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:22:30.740539  483549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-289159:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 09:22:34.300846  481228 out.go:252]   - Generating certificates and keys ...
	I1026 09:22:34.300968  481228 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:22:34.301067  481228 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:22:34.876725  481228 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:22:35.596285  483549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-289159:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.855711896s)
	I1026 09:22:35.596316  483549 kic.go:203] duration metric: took 4.855848775s to extract preloaded images to volume ...
	W1026 09:22:35.596458  483549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:22:35.596573  483549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:22:35.678960  483549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-289159 --name default-k8s-diff-port-289159 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-289159 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-289159 --network default-k8s-diff-port-289159 --ip 192.168.85.2 --volume default-k8s-diff-port-289159:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:22:36.044385  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Running}}
	I1026 09:22:36.063035  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:22:36.085573  483549 cli_runner.go:164] Run: docker exec default-k8s-diff-port-289159 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:22:36.157018  483549 oci.go:144] the created container "default-k8s-diff-port-289159" has a running status.
	I1026 09:22:36.157054  483549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa...
	I1026 09:22:37.163942  483549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:22:37.194666  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:22:37.221790  483549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:22:37.221810  483549 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-289159 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 09:22:37.293579  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:22:37.327279  483549 machine.go:93] provisionDockerMachine start ...
	I1026 09:22:37.327378  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:37.355055  483549 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:37.355393  483549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1026 09:22:37.355404  483549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:22:37.646843  483549 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289159
	
	I1026 09:22:37.646920  483549 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-289159"
	I1026 09:22:37.647034  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:37.676264  483549 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:37.676568  483549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1026 09:22:37.676580  483549 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-289159 && echo "default-k8s-diff-port-289159" | sudo tee /etc/hostname
	I1026 09:22:37.919463  483549 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289159
	
	I1026 09:22:37.919538  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:37.950963  483549 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:37.951275  483549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1026 09:22:37.951295  483549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-289159' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-289159/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-289159' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:22:38.164731  483549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:22:38.164814  483549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:22:38.164859  483549 ubuntu.go:190] setting up certificates
	I1026 09:22:38.164897  483549 provision.go:84] configureAuth start
	I1026 09:22:38.164991  483549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:22:38.191627  483549 provision.go:143] copyHostCerts
	I1026 09:22:38.191694  483549 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:22:38.191703  483549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:22:38.191781  483549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:22:38.191884  483549 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:22:38.191889  483549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:22:38.191915  483549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:22:38.191967  483549 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:22:38.191972  483549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:22:38.191994  483549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:22:38.192083  483549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-289159 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-289159 localhost minikube]
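
provision.go creates server.pem by signing a fresh key with the local minikube CA and stamping the SAN list shown above, so one certificate answers for the container IP, 127.0.0.1, and the hostname alike. A self-contained sketch of that kind of SAN-bearing issuance with Go's crypto/x509; the CA here is generated in-process purely for illustration, whereas minikube loads its existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem (illustrative only).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the same SANs the log shows for server.pem.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-289159"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-289159", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
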
	I1026 09:22:38.705516  483549 provision.go:177] copyRemoteCerts
	I1026 09:22:38.705586  483549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:22:38.705633  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:38.723776  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:22:38.832620  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:22:38.856691  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 09:22:38.879618  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:22:38.902904  483549 provision.go:87] duration metric: took 737.970452ms to configureAuth
	I1026 09:22:38.902932  483549 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:22:38.903576  483549 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:22:38.903722  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:38.924513  483549 main.go:141] libmachine: Using SSH client type: native
	I1026 09:22:38.924893  483549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33425 <nil> <nil>}
	I1026 09:22:38.924915  483549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:22:39.253000  483549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:22:39.253028  483549 machine.go:96] duration metric: took 1.925728183s to provisionDockerMachine
	I1026 09:22:39.253046  483549 client.go:171] duration metric: took 9.38138822s to LocalClient.Create
	I1026 09:22:39.253088  483549 start.go:167] duration metric: took 9.381493263s to libmachine.API.Create "default-k8s-diff-port-289159"
	I1026 09:22:39.253101  483549 start.go:293] postStartSetup for "default-k8s-diff-port-289159" (driver="docker")
	I1026 09:22:39.253130  483549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:22:39.253221  483549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:22:39.253283  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:39.289221  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:22:39.399941  483549 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:22:39.404073  483549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:22:39.404108  483549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:22:39.404120  483549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:22:39.404183  483549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:22:39.404274  483549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:22:39.404379  483549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:22:39.412669  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:22:39.436688  483549 start.go:296] duration metric: took 183.564077ms for postStartSetup
	I1026 09:22:39.437095  483549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:22:39.458509  483549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json ...
	I1026 09:22:39.458848  483549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:22:39.458901  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:39.484592  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:22:35.459416  481228 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:22:37.745649  481228 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 09:22:38.375043  481228 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:22:39.071006  481228 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:22:39.071319  481228 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-167519] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 09:22:39.360967  481228 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 09:22:39.361225  481228 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-167519] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 09:22:39.983745  481228 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:22:39.601369  483549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:22:39.607259  483549 start.go:128] duration metric: took 9.739410886s to createHost
	I1026 09:22:39.607286  483549 start.go:83] releasing machines lock for "default-k8s-diff-port-289159", held for 9.739546265s
	I1026 09:22:39.607357  483549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:22:39.640560  483549 ssh_runner.go:195] Run: cat /version.json
	I1026 09:22:39.640640  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:39.640890  483549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:22:39.640956  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:22:39.681950  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:22:39.696770  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:22:39.827591  483549 ssh_runner.go:195] Run: systemctl --version
	I1026 09:22:39.929024  483549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:22:39.986775  483549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:22:39.993031  483549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:22:39.993103  483549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:22:40.061148  483549 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
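
Because cri-o loads every config in /etc/cni/net.d, the default bridge and podman configs are renamed with a .mk_disabled suffix rather than deleted, leaving kindnet (installed later) as the only active CNI. Roughly the same move in Go, assuming the same glob patterns as the find invocation above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Rename bridge/podman CNI configs out of the way, as the find/mv above does,
// so they stop shadowing the CNI that minikube installs afterwards.
func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
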
	I1026 09:22:40.061194  483549 start.go:495] detecting cgroup driver to use...
	I1026 09:22:40.061238  483549 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:22:40.061307  483549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:22:40.083761  483549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:22:40.101784  483549 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:22:40.101878  483549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:22:40.128598  483549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:22:40.152113  483549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:22:40.324996  483549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:22:40.488744  483549 docker.go:234] disabling docker service ...
	I1026 09:22:40.488828  483549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:22:40.519831  483549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:22:40.534273  483549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:22:40.689753  483549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:22:40.839727  483549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:22:40.853032  483549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:22:40.869629  483549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:22:40.869710  483549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.879064  483549 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:22:40.879175  483549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.888319  483549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.897662  483549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.908110  483549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:22:40.916628  483549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.926139  483549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.940892  483549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:22:40.950396  483549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:22:40.959241  483549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
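
The last two commands confirm that bridged traffic is visible to iptables and switch on IPv4 forwarding, which kube-proxy and the pod network both rely on. The ip_forward write is just a one-line procfs update; a minimal equivalent (requires root, like the sudo invocation above):

package main

import "os"

// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`: without forwarding
// enabled, pod-to-pod and service traffic cannot be routed through the node.
func main() {
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		panic(err)
	}
}
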
	I1026 09:22:40.967432  483549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:22:41.121778  483549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:22:41.320389  483549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:22:41.320478  483549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:22:41.324325  483549 start.go:563] Will wait 60s for crictl version
	I1026 09:22:41.324442  483549 ssh_runner.go:195] Run: which crictl
	I1026 09:22:41.330777  483549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:22:41.368577  483549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:22:41.368735  483549 ssh_runner.go:195] Run: crio --version
	I1026 09:22:41.404780  483549 ssh_runner.go:195] Run: crio --version
	I1026 09:22:41.451014  483549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:22:40.267441  481228 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:22:40.863483  481228 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:22:40.863739  481228 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:22:41.066482  481228 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:22:41.845105  481228 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:22:42.748864  481228 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:22:42.986342  481228 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:22:42.987126  481228 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:22:42.989874  481228 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:22:41.453910  483549 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-289159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:22:41.476370  483549 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:22:41.480719  483549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:22:41.490861  483549 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:22:41.490964  483549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:22:41.491014  483549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:22:41.531974  483549 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:22:41.531994  483549 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:22:41.532062  483549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:22:41.561207  483549 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:22:41.561278  483549 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:22:41.561300  483549 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1026 09:22:41.561432  483549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-289159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:22:41.561550  483549 ssh_runner.go:195] Run: crio config
	I1026 09:22:41.637264  483549 cni.go:84] Creating CNI manager for ""
	I1026 09:22:41.637338  483549 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:22:41.637374  483549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:22:41.637430  483549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-289159 NodeName:default-k8s-diff-port-289159 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:22:41.637624  483549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-289159"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
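
The kubeadm config above is rendered from the profile's values (node IP, API server port 8444, cluster name) before being shipped to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of rendering such a fragment with text/template; the struct fields and template are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down rendering of the InitConfiguration stanza above
// from per-profile values.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	err := t.Execute(os.Stdout, struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.85.2", 8444, "default-k8s-diff-port-289159"})
	if err != nil {
		panic(err)
	}
}
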
	
	I1026 09:22:41.637720  483549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:22:41.646605  483549 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:22:41.646761  483549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:22:41.655010  483549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1026 09:22:41.669613  483549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:22:41.684404  483549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1026 09:22:41.704118  483549 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:22:41.708091  483549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:22:41.718614  483549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:22:41.878768  483549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:22:41.897586  483549 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159 for IP: 192.168.85.2
	I1026 09:22:41.897658  483549 certs.go:195] generating shared ca certs ...
	I1026 09:22:41.897690  483549 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:41.897886  483549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:22:41.897971  483549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:22:41.897997  483549 certs.go:257] generating profile certs ...
	I1026 09:22:41.898082  483549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.key
	I1026 09:22:41.898125  483549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt with IP's: []
	I1026 09:22:42.568000  483549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt ...
	I1026 09:22:42.568085  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: {Name:mka175c4cd99baa71ed1bbda742426cdb0411c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:42.568320  483549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.key ...
	I1026 09:22:42.568355  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.key: {Name:mke85d7c6a6c5a8e7d061879d1e736fc29c22b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:42.568502  483549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key.65278fd2
	I1026 09:22:42.568542  483549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt.65278fd2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 09:22:42.915709  483549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt.65278fd2 ...
	I1026 09:22:42.915787  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt.65278fd2: {Name:mkdd293dec675bf0929464e4759d8a8a60a08d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:42.916041  483549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key.65278fd2 ...
	I1026 09:22:42.916078  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key.65278fd2: {Name:mk31630f28295af3cac3ebd5d01b9e2954d64892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:42.916229  483549 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt.65278fd2 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt
	I1026 09:22:42.916367  483549 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key.65278fd2 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key
	I1026 09:22:42.916473  483549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key
	I1026 09:22:42.916511  483549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.crt with IP's: []
	I1026 09:22:43.384931  483549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.crt ...
	I1026 09:22:43.384968  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.crt: {Name:mk55108a485902786b290841495a1ed0732ff6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:43.385162  483549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key ...
	I1026 09:22:43.385181  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key: {Name:mkf37eb87b179856fe689f78d39cea5354eeeef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:22:43.385359  483549 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:22:43.385403  483549 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:22:43.385417  483549 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:22:43.385443  483549 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:22:43.385470  483549 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:22:43.385496  483549 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:22:43.385542  483549 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:22:43.386170  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:22:43.403491  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:22:43.425056  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:22:43.446189  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:22:43.467623  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 09:22:43.487756  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:22:43.511661  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:22:43.532509  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:22:43.553212  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:22:43.599781  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:22:43.625505  483549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:22:43.652577  483549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:22:43.666030  483549 ssh_runner.go:195] Run: openssl version
	I1026 09:22:43.672941  483549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:22:43.681581  483549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:22:43.685929  483549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:22:43.686045  483549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:22:43.729546  483549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:22:43.737976  483549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:22:43.746230  483549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:22:43.750838  483549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:22:43.750981  483549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:22:43.792552  483549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:22:43.800959  483549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:22:43.809555  483549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:22:43.813931  483549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:22:43.814070  483549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:22:43.856448  483549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
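
Each trusted certificate also gets a symlink named after its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above), which is how OpenSSL-based clients locate CAs in /etc/ssl/certs. A sketch that derives the link name the same way, shelling out to the same openssl flags the log uses:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the openssl/ln pair above: hash the cert's
// subject and symlink it as /etc/ssl/certs/<hash>.0.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // link already exists, like the `test -L || ln -fs` guard
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
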
	I1026 09:22:43.864838  483549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:22:43.869469  483549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:22:43.869572  483549 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:22:43.869756  483549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:22:43.869856  483549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:22:43.906175  483549 cri.go:89] found id: ""
	I1026 09:22:43.906298  483549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:22:43.915752  483549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:22:43.923640  483549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:22:43.923741  483549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:22:43.933495  483549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:22:43.933566  483549 kubeadm.go:157] found existing configuration files:
	
	I1026 09:22:43.933646  483549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1026 09:22:43.941839  483549 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:22:43.941950  483549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:22:43.949232  483549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1026 09:22:43.957445  483549 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:22:43.957563  483549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:22:43.964739  483549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1026 09:22:43.973011  483549 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:22:43.973126  483549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:22:43.980459  483549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1026 09:22:43.988544  483549 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:22:43.988658  483549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 09:22:43.995803  483549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
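
kubeadm is executed with a per-version PATH prefix so the binaries under /var/lib/minikube/binaries/v1.34.1 win over anything on the host, and with preflight checks that are meaningless inside a container (Swap, Mem, SystemVerification, and so on) explicitly ignored; the output below is kubeadm's own. A trimmed sketch of the same invocation from Go, with the ignore list abbreviated:

package main

import (
	"os"
	"os/exec"
)

// Sketch of the init invocation above. exec.Command resolves the binary via
// the parent's PATH, so the versioned kubeadm is addressed by full path, and
// PATH is prepended only for kubeadm's own child processes.
func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.34.1/kubeadm"
	cmd := exec.Command(kubeadm, "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification")
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.34.1:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
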
	I1026 09:22:44.047114  483549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:22:44.047599  483549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:22:44.079779  483549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:22:44.079945  483549 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:22:44.079988  483549 kubeadm.go:318] OS: Linux
	I1026 09:22:44.080049  483549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:22:44.080103  483549 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:22:44.080154  483549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:22:44.080206  483549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:22:44.080258  483549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:22:44.080310  483549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:22:44.080368  483549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:22:44.080420  483549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:22:44.080470  483549 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:22:44.158574  483549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:22:44.158805  483549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:22:44.158937  483549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:22:44.178271  483549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:22:44.185451  483549 out.go:252]   - Generating certificates and keys ...
	I1026 09:22:44.185614  483549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:22:44.185719  483549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:22:42.994247  481228 out.go:252]   - Booting up control plane ...
	I1026 09:22:42.994358  481228 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:22:42.994512  481228 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:22:43.010381  481228 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:22:43.034875  481228 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:22:43.034977  481228 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:22:43.035020  481228 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 09:22:43.208561  481228 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 09:22:44.805106  483549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:22:45.273668  483549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:22:45.380040  483549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 09:22:45.524302  483549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:22:46.353507  483549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:22:46.354141  483549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-289159 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 09:22:46.973122  483549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 09:22:46.973329  483549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-289159 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 09:22:47.890550  483549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:22:48.280190  483549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:22:49.661617  483549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:22:49.661961  483549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:22:50.375022  483549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:22:51.666969  483549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 09:22:52.187065  483549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:22:52.783347  483549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:22:53.264851  483549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:22:53.265979  483549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:22:53.277116  483549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:22:53.280280  483549 out.go:252]   - Booting up control plane ...
	I1026 09:22:53.280409  483549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:22:53.280744  483549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:22:53.281956  483549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:22:53.312834  483549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:22:53.312956  483549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:22:53.321659  483549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:22:53.321766  483549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:22:53.321809  483549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 09:22:53.523995  483549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 09:22:53.524131  483549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 09:22:54.527079  483549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001826436s
	I1026 09:22:54.529363  483549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 09:22:54.529484  483549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1026 09:22:54.529587  483549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 09:22:54.530223  483549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
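
The kubelet-check and control-plane-check phases are plain HTTP polls against the health endpoints listed above: 10248 for the kubelet, the apiserver's livez on 8444, and the controller-manager's and scheduler's local healthz/livez ports. A minimal poller for the kubelet case, with the 4m0s budget from the log shortened:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Poll the kubelet's healthz endpoint the way the kubelet-check phase does,
// giving up after a deadline.
func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for kubelet")
}
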
	I1026 09:22:55.204252  481228 kubeadm.go:318] [apiclient] All control plane components are healthy after 12.004880 seconds
	I1026 09:22:55.204379  481228 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:22:55.238894  481228 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:22:55.783242  481228 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:22:55.783772  481228 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-167519 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:22:56.300296  481228 kubeadm.go:318] [bootstrap-token] Using token: wr5jh4.wxw1xp7uvedt37b8
	I1026 09:22:56.303155  481228 out.go:252]   - Configuring RBAC rules ...
	I1026 09:22:56.303280  481228 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:22:56.315055  481228 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:22:56.331938  481228 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:22:56.336393  481228 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:22:56.341293  481228 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:22:56.346460  481228 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:22:56.372451  481228 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:22:56.802274  481228 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:22:56.881447  481228 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:22:56.883018  481228 kubeadm.go:318] 
	I1026 09:22:56.883106  481228 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:22:56.883113  481228 kubeadm.go:318] 
	I1026 09:22:56.883195  481228 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:22:56.883200  481228 kubeadm.go:318] 
	I1026 09:22:56.883226  481228 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:22:56.887115  481228 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:22:56.887187  481228 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:22:56.887194  481228 kubeadm.go:318] 
	I1026 09:22:56.887250  481228 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:22:56.887255  481228 kubeadm.go:318] 
	I1026 09:22:56.887304  481228 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:22:56.887312  481228 kubeadm.go:318] 
	I1026 09:22:56.887390  481228 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:22:56.887469  481228 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:22:56.887541  481228 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:22:56.887545  481228 kubeadm.go:318] 
	I1026 09:22:56.887895  481228 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:22:56.888027  481228 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:22:56.888057  481228 kubeadm.go:318] 
	I1026 09:22:56.888351  481228 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token wr5jh4.wxw1xp7uvedt37b8 \
	I1026 09:22:56.888466  481228 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:22:56.888666  481228 kubeadm.go:318] 	--control-plane 
	I1026 09:22:56.888676  481228 kubeadm.go:318] 
	I1026 09:22:56.888967  481228 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:22:56.888977  481228 kubeadm.go:318] 
	I1026 09:22:56.889256  481228 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token wr5jh4.wxw1xp7uvedt37b8 \
	I1026 09:22:56.889558  481228 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:22:56.893317  481228 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:22:56.893437  481228 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
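
The --discovery-token-ca-cert-hash in the join commands above pins joins to this cluster's CA: it is "sha256:" followed by the SHA-256 of the CA certificate's DER-encoded public key (SubjectPublicKeyInfo), letting a joining node verify the CA it fetches over the insecure bootstrap channel. Recomputing it from the node's ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Print the kubeadm discovery hash: SHA-256 over the CA's SPKI DER bytes.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
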
	I1026 09:22:56.893454  481228 cni.go:84] Creating CNI manager for ""
	I1026 09:22:56.893461  481228 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:22:56.896582  481228 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:22:56.899444  481228 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:22:56.911186  481228 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1026 09:22:56.911204  481228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:22:56.958363  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 09:22:58.989336  481228 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.030924042s)
	I1026 09:22:58.989429  481228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:22:58.989542  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:22:58.989587  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-167519 minikube.k8s.io/updated_at=2025_10_26T09_22_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=old-k8s-version-167519 minikube.k8s.io/primary=true
	I1026 09:22:59.311856  481228 ops.go:34] apiserver oom_adj: -16
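The -16 read back from /proc/<pid>/oom_adj is the legacy kernel view of the oom_score_adj that the kubelet applies to critical static pods (typically -997), which keeps the apiserver near-exempt from the OOM killer. A small sketch of reading both interfaces, assuming that usual kubelet policy:

    pid=$(pgrep -xn kube-apiserver)
    cat /proc/$pid/oom_score_adj   # expected -997 for critical static pods
    cat /proc/$pid/oom_adj         # legacy scale: -997 * 17 / 1000 -> -16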
	I1026 09:22:59.311963  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:22:59.813003  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:00.694407  483549 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.163981934s
	I1026 09:23:02.113408  483549 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.582783942s
	I1026 09:23:04.031777  483549 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.50197332s
	I1026 09:23:04.052189  483549 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:23:04.078782  483549 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:23:04.097558  483549 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:23:04.097779  483549 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-289159 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:23:04.111057  483549 kubeadm.go:318] [bootstrap-token] Using token: xrxjd6.85w4wtryqvi2rvrv
	I1026 09:23:04.114227  483549 out.go:252]   - Configuring RBAC rules ...
	I1026 09:23:04.114351  483549 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:23:04.122167  483549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:23:04.131383  483549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:23:04.135981  483549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:23:04.140659  483549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:23:04.144949  483549 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:23:04.439833  483549 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:23:00.312981  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:00.812461  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:01.312056  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:01.812699  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:02.312498  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:02.812277  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:03.312492  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:03.812276  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:04.312945  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:04.813009  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:04.899230  483549 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:23:05.439584  483549 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:23:05.440872  483549 kubeadm.go:318] 
	I1026 09:23:05.440959  483549 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:23:05.440978  483549 kubeadm.go:318] 
	I1026 09:23:05.441061  483549 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:23:05.441072  483549 kubeadm.go:318] 
	I1026 09:23:05.441100  483549 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:23:05.441168  483549 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:23:05.441231  483549 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:23:05.441241  483549 kubeadm.go:318] 
	I1026 09:23:05.441298  483549 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:23:05.441305  483549 kubeadm.go:318] 
	I1026 09:23:05.441356  483549 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:23:05.441365  483549 kubeadm.go:318] 
	I1026 09:23:05.441422  483549 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:23:05.441507  483549 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:23:05.441587  483549 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:23:05.441597  483549 kubeadm.go:318] 
	I1026 09:23:05.441697  483549 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:23:05.441784  483549 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:23:05.441793  483549 kubeadm.go:318] 
	I1026 09:23:05.441882  483549 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token xrxjd6.85w4wtryqvi2rvrv \
	I1026 09:23:05.441994  483549 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:23:05.442019  483549 kubeadm.go:318] 	--control-plane 
	I1026 09:23:05.442030  483549 kubeadm.go:318] 
	I1026 09:23:05.442129  483549 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:23:05.442141  483549 kubeadm.go:318] 
	I1026 09:23:05.442228  483549 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token xrxjd6.85w4wtryqvi2rvrv \
	I1026 09:23:05.442339  483549 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:23:05.447342  483549 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:23:05.447584  483549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:23:05.447704  483549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 09:23:05.447730  483549 cni.go:84] Creating CNI manager for ""
	I1026 09:23:05.447738  483549 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:23:05.451044  483549 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:23:05.312913  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:05.812599  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:06.313026  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:06.812706  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:07.312062  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:07.812659  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:08.312103  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:08.812873  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:09.312230  481228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:09.493230  481228 kubeadm.go:1113] duration metric: took 10.503756135s to wait for elevateKubeSystemPrivileges
	I1026 09:23:09.493265  481228 kubeadm.go:402] duration metric: took 35.732737379s to StartCluster
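The repeated "kubectl get sa default" calls above are a readiness poll: minikube waits for the default ServiceAccount to exist before treating RBAC elevation as complete. A minimal shell sketch of the same loop, using the paths from this run:

    KCONF=/var/lib/minikube/kubeconfig
    KCTL=/var/lib/minikube/binaries/v1.28.0/kubectl
    until sudo "$KCTL" get sa default --kubeconfig="$KCONF" >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms cadence visible in the timestamps above
    done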
	I1026 09:23:09.493290  481228 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:23:09.493382  481228 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:23:09.494091  481228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:23:09.494293  481228 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:23:09.494453  481228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:23:09.494763  481228 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:23:09.494810  481228 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:23:09.494873  481228 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-167519"
	I1026 09:23:09.494897  481228 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-167519"
	I1026 09:23:09.494920  481228 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:23:09.495503  481228 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:23:09.496065  481228 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-167519"
	I1026 09:23:09.496086  481228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-167519"
	I1026 09:23:09.496404  481228 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:23:09.500326  481228 out.go:179] * Verifying Kubernetes components...
	I1026 09:23:05.453949  483549 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:23:05.458229  483549 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:23:05.458295  483549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:23:05.491120  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 09:23:05.849272  483549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:23:05.849426  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:05.849506  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-289159 minikube.k8s.io/updated_at=2025_10_26T09_23_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=default-k8s-diff-port-289159 minikube.k8s.io/primary=true
	I1026 09:23:06.163244  483549 ops.go:34] apiserver oom_adj: -16
	I1026 09:23:06.163365  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:06.663935  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:07.164255  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:07.663921  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:08.163464  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:08.663970  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:09.163442  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:09.505940  481228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:23:09.531012  481228 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-167519"
	I1026 09:23:09.531070  481228 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:23:09.531544  481228 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:23:09.562784  481228 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:23:09.568421  481228 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:23:09.568454  481228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:23:09.568530  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:09.580529  481228 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:23:09.580553  481228 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:23:09.580630  481228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:09.606293  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:23:09.622946  481228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33420 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
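The docker container inspect templates above resolve which host port is mapped to the container's SSH port (22/tcp), which the test then dials at 127.0.0.1:33420. The same lookup can be done directly, assuming the docker CLI on the host:

    docker port old-k8s-version-167519 22
    # e.g. -> 0.0.0.0:33420 (the HostPort the ssh client above connects to)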
	I1026 09:23:09.664185  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:10.164127  483549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:23:10.471444  483549 kubeadm.go:1113] duration metric: took 4.62205029s to wait for elevateKubeSystemPrivileges
	I1026 09:23:10.471477  483549 kubeadm.go:402] duration metric: took 26.601912268s to StartCluster
	I1026 09:23:10.471495  483549 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:23:10.471558  483549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:23:10.472650  483549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:23:10.472908  483549 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:23:10.473035  483549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:23:10.473298  483549 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:23:10.473355  483549 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:23:10.473427  483549 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-289159"
	I1026 09:23:10.473451  483549 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-289159"
	I1026 09:23:10.473484  483549 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:23:10.474377  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:23:10.474536  483549 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-289159"
	I1026 09:23:10.474561  483549 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-289159"
	I1026 09:23:10.474872  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:23:10.479112  483549 out.go:179] * Verifying Kubernetes components...
	I1026 09:23:10.490506  483549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:23:10.519912  483549 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:23:10.522978  483549 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:23:10.522999  483549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:23:10.523058  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:23:10.523860  483549 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-289159"
	I1026 09:23:10.523901  483549 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:23:10.524351  483549 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:23:10.571483  483549 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:23:10.571503  483549 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:23:10.571576  483549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:23:10.583574  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:23:10.606102  483549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:23:11.130676  483549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:23:11.278855  483549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:23:11.278950  483549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:23:11.330105  483549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:23:12.634367  483549 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.503655101s)
	I1026 09:23:12.634430  483549 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.355393152s)
	I1026 09:23:12.634444  483549 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 09:23:12.635763  483549 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.35678563s)
	I1026 09:23:12.636582  483549 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-289159" to be "Ready" ...
	I1026 09:23:12.636931  483549 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.306801221s)
	I1026 09:23:12.700233  483549 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 09:23:10.084638  481228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:23:10.265625  481228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:23:10.265844  481228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:23:10.287675  481228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:23:12.845476  481228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.760755845s)
	I1026 09:23:12.845560  481228 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.579678061s)
	I1026 09:23:12.846427  481228 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-167519" to be "Ready" ...
	I1026 09:23:12.846814  481228 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.581091167s)
	I1026 09:23:12.846833  481228 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 09:23:12.847912  481228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.560155112s)
	I1026 09:23:12.917543  481228 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
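The long bash pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts block ahead of the forward directive and a log directive ahead of errors, then kubectl replace pushes the result back. Reconstructed from those sed expressions, the fragment injected into the Corefile is:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }

so pods in the old-k8s-version-167519 profile can resolve host.minikube.internal to the host gateway (192.168.85.1 plays the same role for default-k8s-diff-port-289159).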
	I1026 09:23:12.703206  483549 addons.go:514] duration metric: took 2.229829325s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 09:23:13.140505  483549 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-289159" context rescaled to 1 replicas
	I1026 09:23:12.920582  481228 addons.go:514] duration metric: took 3.425751893s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 09:23:13.353735  481228 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-167519" context rescaled to 1 replicas
	W1026 09:23:14.849391  481228 node_ready.go:57] node "old-k8s-version-167519" has "Ready":"False" status (will retry)
	W1026 09:23:14.640835  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:17.141435  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:16.849520  481228 node_ready.go:57] node "old-k8s-version-167519" has "Ready":"False" status (will retry)
	W1026 09:23:18.850242  481228 node_ready.go:57] node "old-k8s-version-167519" has "Ready":"False" status (will retry)
	W1026 09:23:19.640551  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:22.139547  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:20.855075  481228 node_ready.go:57] node "old-k8s-version-167519" has "Ready":"False" status (will retry)
	W1026 09:23:23.349397  481228 node_ready.go:57] node "old-k8s-version-167519" has "Ready":"False" status (will retry)
	W1026 09:23:25.350079  481228 node_ready.go:57] node "old-k8s-version-167519" has "Ready":"False" status (will retry)
	I1026 09:23:25.856383  481228 node_ready.go:49] node "old-k8s-version-167519" is "Ready"
	I1026 09:23:25.856417  481228 node_ready.go:38] duration metric: took 13.009971462s for node "old-k8s-version-167519" to be "Ready" ...
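The gate that just completed corresponds to the node's Ready condition flipping to True. One way to read that condition directly, as a sketch:

    kubectl get node old-k8s-version-167519 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # -> True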
	I1026 09:23:25.856432  481228 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:23:25.856487  481228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:23:25.880173  481228 api_server.go:72] duration metric: took 16.385851202s to wait for apiserver process to appear ...
	I1026 09:23:25.880202  481228 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:23:25.880222  481228 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 09:23:25.891618  481228 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:23:25.893061  481228 api_server.go:141] control plane version: v1.28.0
	I1026 09:23:25.893084  481228 api_server.go:131] duration metric: took 12.874725ms to wait for apiserver health ...
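The healthz probe above is a plain HTTPS GET against the apiserver; /healthz is readable anonymously under the default RBAC rules, so an equivalent manual check (sketch; -k because minikube's CA is self-signed) is:

    curl -k https://192.168.76.2:8443/healthz
    # -> ok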
	I1026 09:23:25.893093  481228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:23:25.897899  481228 system_pods.go:59] 8 kube-system pods found
	I1026 09:23:25.897939  481228 system_pods.go:61] "coredns-5dd5756b68-h6qmf" [1226fa58-8832-4d5c-a8e4-e44cc16a164f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:25.897948  481228 system_pods.go:61] "etcd-old-k8s-version-167519" [7aff8350-385d-4207-8ec4-b91b7f5b7b9a] Running
	I1026 09:23:25.897956  481228 system_pods.go:61] "kindnet-ljrzw" [3bab8357-5839-413e-afee-cae96d432734] Running
	I1026 09:23:25.897960  481228 system_pods.go:61] "kube-apiserver-old-k8s-version-167519" [c33f6968-1126-4bc8-ba77-7b62aaecd264] Running
	I1026 09:23:25.897966  481228 system_pods.go:61] "kube-controller-manager-old-k8s-version-167519" [0466c907-04c5-4eec-881c-c5c6230cb462] Running
	I1026 09:23:25.897974  481228 system_pods.go:61] "kube-proxy-nxhdx" [9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c] Running
	I1026 09:23:25.897979  481228 system_pods.go:61] "kube-scheduler-old-k8s-version-167519" [ec0adb8e-5108-49a1-ae0e-d09d3e73d316] Running
	I1026 09:23:25.897985  481228 system_pods.go:61] "storage-provisioner" [6e04a245-ca01-4c7d-9d96-fd35d704d88a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:25.897995  481228 system_pods.go:74] duration metric: took 4.89621ms to wait for pod list to return data ...
	I1026 09:23:25.898005  481228 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:23:25.900910  481228 default_sa.go:45] found service account: "default"
	I1026 09:23:25.900930  481228 default_sa.go:55] duration metric: took 2.914096ms for default service account to be created ...
	I1026 09:23:25.900950  481228 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:23:25.904761  481228 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:25.904792  481228 system_pods.go:89] "coredns-5dd5756b68-h6qmf" [1226fa58-8832-4d5c-a8e4-e44cc16a164f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:25.904799  481228 system_pods.go:89] "etcd-old-k8s-version-167519" [7aff8350-385d-4207-8ec4-b91b7f5b7b9a] Running
	I1026 09:23:25.904805  481228 system_pods.go:89] "kindnet-ljrzw" [3bab8357-5839-413e-afee-cae96d432734] Running
	I1026 09:23:25.904810  481228 system_pods.go:89] "kube-apiserver-old-k8s-version-167519" [c33f6968-1126-4bc8-ba77-7b62aaecd264] Running
	I1026 09:23:25.904816  481228 system_pods.go:89] "kube-controller-manager-old-k8s-version-167519" [0466c907-04c5-4eec-881c-c5c6230cb462] Running
	I1026 09:23:25.904819  481228 system_pods.go:89] "kube-proxy-nxhdx" [9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c] Running
	I1026 09:23:25.904823  481228 system_pods.go:89] "kube-scheduler-old-k8s-version-167519" [ec0adb8e-5108-49a1-ae0e-d09d3e73d316] Running
	I1026 09:23:25.904829  481228 system_pods.go:89] "storage-provisioner" [6e04a245-ca01-4c7d-9d96-fd35d704d88a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:25.904850  481228 retry.go:31] will retry after 255.118192ms: missing components: kube-dns
	I1026 09:23:26.164539  481228 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:26.164575  481228 system_pods.go:89] "coredns-5dd5756b68-h6qmf" [1226fa58-8832-4d5c-a8e4-e44cc16a164f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:26.164582  481228 system_pods.go:89] "etcd-old-k8s-version-167519" [7aff8350-385d-4207-8ec4-b91b7f5b7b9a] Running
	I1026 09:23:26.164589  481228 system_pods.go:89] "kindnet-ljrzw" [3bab8357-5839-413e-afee-cae96d432734] Running
	I1026 09:23:26.164594  481228 system_pods.go:89] "kube-apiserver-old-k8s-version-167519" [c33f6968-1126-4bc8-ba77-7b62aaecd264] Running
	I1026 09:23:26.164598  481228 system_pods.go:89] "kube-controller-manager-old-k8s-version-167519" [0466c907-04c5-4eec-881c-c5c6230cb462] Running
	I1026 09:23:26.164602  481228 system_pods.go:89] "kube-proxy-nxhdx" [9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c] Running
	I1026 09:23:26.164606  481228 system_pods.go:89] "kube-scheduler-old-k8s-version-167519" [ec0adb8e-5108-49a1-ae0e-d09d3e73d316] Running
	I1026 09:23:26.164612  481228 system_pods.go:89] "storage-provisioner" [6e04a245-ca01-4c7d-9d96-fd35d704d88a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:26.164628  481228 retry.go:31] will retry after 292.965239ms: missing components: kube-dns
	I1026 09:23:26.462684  481228 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:26.462750  481228 system_pods.go:89] "coredns-5dd5756b68-h6qmf" [1226fa58-8832-4d5c-a8e4-e44cc16a164f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:26.462758  481228 system_pods.go:89] "etcd-old-k8s-version-167519" [7aff8350-385d-4207-8ec4-b91b7f5b7b9a] Running
	I1026 09:23:26.462764  481228 system_pods.go:89] "kindnet-ljrzw" [3bab8357-5839-413e-afee-cae96d432734] Running
	I1026 09:23:26.462769  481228 system_pods.go:89] "kube-apiserver-old-k8s-version-167519" [c33f6968-1126-4bc8-ba77-7b62aaecd264] Running
	I1026 09:23:26.462774  481228 system_pods.go:89] "kube-controller-manager-old-k8s-version-167519" [0466c907-04c5-4eec-881c-c5c6230cb462] Running
	I1026 09:23:26.462778  481228 system_pods.go:89] "kube-proxy-nxhdx" [9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c] Running
	I1026 09:23:26.462783  481228 system_pods.go:89] "kube-scheduler-old-k8s-version-167519" [ec0adb8e-5108-49a1-ae0e-d09d3e73d316] Running
	I1026 09:23:26.462790  481228 system_pods.go:89] "storage-provisioner" [6e04a245-ca01-4c7d-9d96-fd35d704d88a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:26.462809  481228 retry.go:31] will retry after 333.981776ms: missing components: kube-dns
	I1026 09:23:26.801148  481228 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:26.801177  481228 system_pods.go:89] "coredns-5dd5756b68-h6qmf" [1226fa58-8832-4d5c-a8e4-e44cc16a164f] Running
	I1026 09:23:26.801185  481228 system_pods.go:89] "etcd-old-k8s-version-167519" [7aff8350-385d-4207-8ec4-b91b7f5b7b9a] Running
	I1026 09:23:26.801190  481228 system_pods.go:89] "kindnet-ljrzw" [3bab8357-5839-413e-afee-cae96d432734] Running
	I1026 09:23:26.801194  481228 system_pods.go:89] "kube-apiserver-old-k8s-version-167519" [c33f6968-1126-4bc8-ba77-7b62aaecd264] Running
	I1026 09:23:26.801199  481228 system_pods.go:89] "kube-controller-manager-old-k8s-version-167519" [0466c907-04c5-4eec-881c-c5c6230cb462] Running
	I1026 09:23:26.801203  481228 system_pods.go:89] "kube-proxy-nxhdx" [9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c] Running
	I1026 09:23:26.801207  481228 system_pods.go:89] "kube-scheduler-old-k8s-version-167519" [ec0adb8e-5108-49a1-ae0e-d09d3e73d316] Running
	I1026 09:23:26.801211  481228 system_pods.go:89] "storage-provisioner" [6e04a245-ca01-4c7d-9d96-fd35d704d88a] Running
	I1026 09:23:26.801218  481228 system_pods.go:126] duration metric: took 900.262613ms to wait for k8s-apps to be running ...
	I1026 09:23:26.801232  481228 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:23:26.801288  481228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:23:26.814391  481228 system_svc.go:56] duration metric: took 13.148148ms WaitForService to wait for kubelet
	I1026 09:23:26.814476  481228 kubeadm.go:586] duration metric: took 17.320158698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:23:26.814503  481228 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:23:26.817274  481228 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:23:26.817306  481228 node_conditions.go:123] node cpu capacity is 2
	I1026 09:23:26.817321  481228 node_conditions.go:105] duration metric: took 2.812073ms to run NodePressure ...
	I1026 09:23:26.817333  481228 start.go:241] waiting for startup goroutines ...
	I1026 09:23:26.817340  481228 start.go:246] waiting for cluster config update ...
	I1026 09:23:26.817351  481228 start.go:255] writing updated cluster config ...
	I1026 09:23:26.817662  481228 ssh_runner.go:195] Run: rm -f paused
	I1026 09:23:26.821400  481228 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:23:26.826253  481228 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-h6qmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:26.832219  481228 pod_ready.go:94] pod "coredns-5dd5756b68-h6qmf" is "Ready"
	I1026 09:23:26.832243  481228 pod_ready.go:86] duration metric: took 5.96317ms for pod "coredns-5dd5756b68-h6qmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:26.835290  481228 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:26.840653  481228 pod_ready.go:94] pod "etcd-old-k8s-version-167519" is "Ready"
	I1026 09:23:26.840683  481228 pod_ready.go:86] duration metric: took 5.36861ms for pod "etcd-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:26.843732  481228 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:26.848843  481228 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-167519" is "Ready"
	I1026 09:23:26.848871  481228 pod_ready.go:86] duration metric: took 5.112673ms for pod "kube-apiserver-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:26.852006  481228 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:27.225894  481228 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-167519" is "Ready"
	I1026 09:23:27.225927  481228 pod_ready.go:86] duration metric: took 373.882684ms for pod "kube-controller-manager-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:27.426695  481228 pod_ready.go:83] waiting for pod "kube-proxy-nxhdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:27.825158  481228 pod_ready.go:94] pod "kube-proxy-nxhdx" is "Ready"
	I1026 09:23:27.825188  481228 pod_ready.go:86] duration metric: took 398.443286ms for pod "kube-proxy-nxhdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:28.026139  481228 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:28.425746  481228 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-167519" is "Ready"
	I1026 09:23:28.425775  481228 pod_ready.go:86] duration metric: took 399.604163ms for pod "kube-scheduler-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:28.425788  481228 pod_ready.go:40] duration metric: took 1.604359273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
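The pod_ready phase above polls each labelled kube-system pod for the Ready condition. Roughly the same wait can be expressed with kubectl directly, e.g. for the CoreDNS pod (sketch):

    kubectl -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=4m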
	I1026 09:23:28.486343  481228 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1026 09:23:28.489659  481228 out.go:203] 
	W1026 09:23:28.492670  481228 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 09:23:28.495695  481228 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 09:23:28.499504  481228 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-167519" cluster and "default" namespace by default
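The skew warning above flags that kubectl 1.33.2 is five minor versions ahead of the 1.28.0 apiserver, well outside the supported +/-1 window. The workaround the output suggests runs the cluster-matched kubectl that minikube downloads per profile:

    minikube -p old-k8s-version-167519 kubectl -- get pods -A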
	W1026 09:23:24.640418  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:27.140274  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:29.640360  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:32.139272  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	W1026 09:23:34.140152  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 26 09:23:25 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:25.845723345Z" level=info msg="Created container 8bc9e555a232f7e6b09a6e6a51b65e6c7ad2d0bc8505a6d29c8b91ce12ae9a3a: kube-system/coredns-5dd5756b68-h6qmf/coredns" id=419d0399-657a-4a4f-906d-98f71757bbfe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:23:25 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:25.846987271Z" level=info msg="Starting container: 8bc9e555a232f7e6b09a6e6a51b65e6c7ad2d0bc8505a6d29c8b91ce12ae9a3a" id=9f099eaa-29be-4174-b0c4-36e8011d2be0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:23:25 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:25.851728911Z" level=info msg="Started container" PID=1932 containerID=8bc9e555a232f7e6b09a6e6a51b65e6c7ad2d0bc8505a6d29c8b91ce12ae9a3a description=kube-system/coredns-5dd5756b68-h6qmf/coredns id=9f099eaa-29be-4174-b0c4-36e8011d2be0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc43145d61626a4ad64ab71ca280298a6ced660cfefb09ac2adf6862379523f9
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.051525363Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cd59da02-b504-4e11-bd71-d4d9607aa090 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.05160205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.059478877Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b UID:8840eaba-c0c0-4054-98c5-7062e5c2f5e4 NetNS:/var/run/netns/d2c0154c-8ff5-4bca-8a43-ff50e1120b20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ed380}] Aliases:map[]}"
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.05954163Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.075833195Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b UID:8840eaba-c0c0-4054-98c5-7062e5c2f5e4 NetNS:/var/run/netns/d2c0154c-8ff5-4bca-8a43-ff50e1120b20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ed380}] Aliases:map[]}"
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.075984909Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.080818571Z" level=info msg="Ran pod sandbox 874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b with infra container: default/busybox/POD" id=cd59da02-b504-4e11-bd71-d4d9607aa090 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.08184896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=539d0fc0-2f26-4f99-8430-749ac04fe639 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.082009987Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=539d0fc0-2f26-4f99-8430-749ac04fe639 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.082057151Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=539d0fc0-2f26-4f99-8430-749ac04fe639 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.082791487Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aafc30d5-80c0-4780-848d-67ef3db442d2 name=/runtime.v1.ImageService/PullImage
	Oct 26 09:23:29 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:29.085779496Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.221070822Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=aafc30d5-80c0-4780-848d-67ef3db442d2 name=/runtime.v1.ImageService/PullImage
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.222521515Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1290fabe-3438-4f22-8fc9-0abddc9be748 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.224971696Z" level=info msg="Creating container: default/busybox/busybox" id=5dedeb83-0d67-4237-bfd0-bc2c665b0029 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.225092896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.229584883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.230026784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.247367568Z" level=info msg="Created container d999956943dcddd4bc6804b4e45bd03a43199ae118295fe632caacf0fb7dce94: default/busybox/busybox" id=5dedeb83-0d67-4237-bfd0-bc2c665b0029 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.249499641Z" level=info msg="Starting container: d999956943dcddd4bc6804b4e45bd03a43199ae118295fe632caacf0fb7dce94" id=0f364ccb-022e-4cf8-b3c0-6bc660a768f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:23:31 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:31.252540689Z" level=info msg="Started container" PID=1995 containerID=d999956943dcddd4bc6804b4e45bd03a43199ae118295fe632caacf0fb7dce94 description=default/busybox/busybox id=0f364ccb-022e-4cf8-b3c0-6bc660a768f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b
	Oct 26 09:23:37 old-k8s-version-167519 crio[844]: time="2025-10-26T09:23:37.923462421Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
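The CRI-O entries above trace the full CRI flow for the busybox pod: RunPodSandbox, ImageStatus (miss), PullImage, CreateContainer, StartContainer. The same state can be inspected from inside the node, assuming the crictl binary shipped in the minikube node image:

    minikube -p old-k8s-version-167519 ssh "sudo crictl ps -a"
    minikube -p old-k8s-version-167519 ssh "sudo crictl pods --name busybox"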
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d999956943dcd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   874855771e3c1       busybox                                          default
	8bc9e555a232f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   bc43145d61626       coredns-5dd5756b68-h6qmf                         kube-system
	bfef32256b1dd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   e8a52d7130edf       storage-provisioner                              kube-system
	68c87bc8c58ae       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   ed0a837e19ec4       kindnet-ljrzw                                    kube-system
	df7faa8712989       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   1de6ee767c01a       kube-proxy-nxhdx                                 kube-system
	1aa835ae2e448       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      54 seconds ago      Running             kube-scheduler            0                   51a0dec8e378d       kube-scheduler-old-k8s-version-167519            kube-system
	988635fc352df       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      54 seconds ago      Running             kube-controller-manager   0                   a9c69cf228df2       kube-controller-manager-old-k8s-version-167519   kube-system
	a0bb6e7944d15       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      54 seconds ago      Running             kube-apiserver            0                   6fcffefb69810       kube-apiserver-old-k8s-version-167519            kube-system
	94129e1cecc74       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      54 seconds ago      Running             etcd                      0                   d11eb8894bf01       etcd-old-k8s-version-167519                      kube-system
	
	
	==> coredns [8bc9e555a232f7e6b09a6e6a51b65e6c7ad2d0bc8505a6d29c8b91ce12ae9a3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60910 - 2198 "HINFO IN 6356455098168507122.7821324550251153892. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022332331s
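The single HINFO query logged above is CoreDNS's startup self-check; the NXDOMAIN answer is expected for its random probe name. A quick end-to-end check against this DNS deployment, as a sketch runnable from any namespace:

    kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- \
      nslookup kubernetes.default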
	
	
	==> describe nodes <==
	Name:               old-k8s-version-167519
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-167519
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=old-k8s-version-167519
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_22_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:22:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-167519
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:23:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:23:27 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:23:27 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:23:27 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:23:27 +0000   Sun, 26 Oct 2025 09:23:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-167519
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1a149092-d049-4ee0-944f-a1babc9259c8
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-h6qmf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-167519                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-ljrzw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-167519             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-167519    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-nxhdx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-167519             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-167519 event: Registered Node old-k8s-version-167519 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-167519 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 08:53] overlayfs: idmapped layers are currently not supported
	[Oct26 08:58] overlayfs: idmapped layers are currently not supported
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [94129e1cecc74b7cae70eeada38dedd4f1a7d9a92097838784c7e62428e7f47e] <==
	{"level":"info","ts":"2025-10-26T09:22:45.050915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-26T09:22:45.051106Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-26T09:22:45.066867Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T09:22:45.067084Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T09:22:45.074775Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T09:22:45.075733Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T09:22:45.07584Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T09:22:45.390852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-26T09:22:45.391059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-26T09:22:45.391116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-26T09:22:45.391165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-26T09:22:45.391201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T09:22:45.391239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-26T09:22:45.391274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T09:22:45.399006Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:22:45.399211Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-167519 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T09:22:45.39929Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:22:45.400574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T09:22:45.418785Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:22:45.418969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:22:45.419039Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:22:45.419081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:22:45.420087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-26T09:22:45.454772Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T09:22:45.462891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:23:39 up  3:06,  0 user,  load average: 2.59, 2.78, 2.62
	Linux old-k8s-version-167519 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [68c87bc8c58ae92d712da19d34f410bb7df4bc3e7c6d8db52446760df3a8bbd6] <==
	I1026 09:23:14.805756       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:23:14.805999       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:23:14.806135       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:23:14.806152       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:23:14.806165       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:23:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:23:15.106090       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:23:15.106182       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:23:15.106220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:23:15.107324       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 09:23:15.306558       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:23:15.306647       1 metrics.go:72] Registering metrics
	I1026 09:23:15.306752       1 controller.go:711] "Syncing nftables rules"
	I1026 09:23:25.112104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:23:25.112170       1 main.go:301] handling current node
	I1026 09:23:35.106817       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:23:35.106923       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0bb6e7944d15a35efdae6f563cdbd6b63bdb65d0cfe6a1b60738a8530047cbc] <==
	I1026 09:22:53.031763       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 09:22:53.042499       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 09:22:53.042653       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 09:22:53.066884       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 09:22:53.116482       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 09:22:53.116774       1 aggregator.go:166] initial CRD sync complete...
	I1026 09:22:53.117752       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 09:22:53.117820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:22:53.117852       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:22:53.119734       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:22:53.765842       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 09:22:53.782839       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 09:22:53.782865       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:22:54.856507       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:22:54.961573       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:22:55.101251       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 09:22:55.117194       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 09:22:55.121132       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 09:22:55.132506       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:22:55.825784       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 09:22:56.778373       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 09:22:56.801022       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 09:22:56.816984       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 09:23:09.471813       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1026 09:23:09.785827       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [988635fc352df8d0adcb170ea2c603041284814646ec24a490443b00c50de2dc] <==
	I1026 09:23:09.322680       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-167519" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1026 09:23:09.323654       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-167519" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1026 09:23:09.339906       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 09:23:09.557384       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ljrzw"
	I1026 09:23:09.712942       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 09:23:09.713004       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 09:23:09.716375       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 09:23:09.724004       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nxhdx"
	I1026 09:23:09.896519       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1026 09:23:10.100672       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-h6qmf"
	I1026 09:23:10.189173       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hcvgp"
	I1026 09:23:10.246210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="350.362145ms"
	I1026 09:23:10.296642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.267407ms"
	I1026 09:23:10.296780       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.284µs"
	I1026 09:23:12.939420       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1026 09:23:13.005299       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hcvgp"
	I1026 09:23:13.020420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.738221ms"
	I1026 09:23:13.044869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.401797ms"
	I1026 09:23:13.045009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.086µs"
	I1026 09:23:25.468043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.029µs"
	I1026 09:23:25.490857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="158.55µs"
	I1026 09:23:26.494109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.249µs"
	I1026 09:23:26.530603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.857123ms"
	I1026 09:23:26.530898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.502µs"
	I1026 09:23:29.225160       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [df7faa8712989a25c7c6f22f5a42279fa4ae493625af9f7aa191879bffd7ca13] <==
	I1026 09:23:10.787647       1 server_others.go:69] "Using iptables proxy"
	I1026 09:23:10.816281       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1026 09:23:10.863983       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:23:10.868124       1 server_others.go:152] "Using iptables Proxier"
	I1026 09:23:10.868158       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 09:23:10.868167       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 09:23:10.868198       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 09:23:10.868457       1 server.go:846] "Version info" version="v1.28.0"
	I1026 09:23:10.868469       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:23:10.869354       1 config.go:188] "Starting service config controller"
	I1026 09:23:10.869375       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 09:23:10.869393       1 config.go:97] "Starting endpoint slice config controller"
	I1026 09:23:10.869396       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 09:23:10.869869       1 config.go:315] "Starting node config controller"
	I1026 09:23:10.869875       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 09:23:10.969637       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 09:23:10.969692       1 shared_informer.go:318] Caches are synced for service config
	I1026 09:23:10.973353       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1aa835ae2e448a4e74a8274399d228a129aa7da1bdc2f032036860cc1643637e] <==
	I1026 09:22:53.366856       1 serving.go:348] Generated self-signed cert in-memory
	I1026 09:22:55.419178       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 09:22:55.419277       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:22:55.424234       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 09:22:55.424602       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 09:22:55.424662       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 09:22:55.424710       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 09:22:55.426895       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:22:55.426995       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 09:22:55.428076       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:22:55.428105       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 09:22:55.525204       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 09:22:55.527457       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 09:22:55.528519       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 26 09:23:09 old-k8s-version-167519 kubelet[1390]: I1026 09:23:09.924210    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dvv\" (UniqueName: \"kubernetes.io/projected/3bab8357-5839-413e-afee-cae96d432734-kube-api-access-74dvv\") pod \"kindnet-ljrzw\" (UID: \"3bab8357-5839-413e-afee-cae96d432734\") " pod="kube-system/kindnet-ljrzw"
	Oct 26 09:23:09 old-k8s-version-167519 kubelet[1390]: I1026 09:23:09.924296    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bab8357-5839-413e-afee-cae96d432734-xtables-lock\") pod \"kindnet-ljrzw\" (UID: \"3bab8357-5839-413e-afee-cae96d432734\") " pod="kube-system/kindnet-ljrzw"
	Oct 26 09:23:09 old-k8s-version-167519 kubelet[1390]: I1026 09:23:09.924359    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3bab8357-5839-413e-afee-cae96d432734-cni-cfg\") pod \"kindnet-ljrzw\" (UID: \"3bab8357-5839-413e-afee-cae96d432734\") " pod="kube-system/kindnet-ljrzw"
	Oct 26 09:23:09 old-k8s-version-167519 kubelet[1390]: I1026 09:23:09.924388    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bab8357-5839-413e-afee-cae96d432734-lib-modules\") pod \"kindnet-ljrzw\" (UID: \"3bab8357-5839-413e-afee-cae96d432734\") " pod="kube-system/kindnet-ljrzw"
	Oct 26 09:23:10 old-k8s-version-167519 kubelet[1390]: I1026 09:23:10.028848    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c-kube-proxy\") pod \"kube-proxy-nxhdx\" (UID: \"9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c\") " pod="kube-system/kube-proxy-nxhdx"
	Oct 26 09:23:10 old-k8s-version-167519 kubelet[1390]: I1026 09:23:10.029131    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c-xtables-lock\") pod \"kube-proxy-nxhdx\" (UID: \"9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c\") " pod="kube-system/kube-proxy-nxhdx"
	Oct 26 09:23:10 old-k8s-version-167519 kubelet[1390]: I1026 09:23:10.029215    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c-lib-modules\") pod \"kube-proxy-nxhdx\" (UID: \"9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c\") " pod="kube-system/kube-proxy-nxhdx"
	Oct 26 09:23:10 old-k8s-version-167519 kubelet[1390]: I1026 09:23:10.029277    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tng8\" (UniqueName: \"kubernetes.io/projected/9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c-kube-api-access-2tng8\") pod \"kube-proxy-nxhdx\" (UID: \"9e62e5c3-761b-4ab3-b5a7-07a43c8e7c2c\") " pod="kube-system/kube-proxy-nxhdx"
	Oct 26 09:23:10 old-k8s-version-167519 kubelet[1390]: W1026 09:23:10.489004    1390 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-1de6ee767c01a838888aeddba253f036a56c0b61c491d7c1c5fd33d02259e22b WatchSource:0}: Error finding container 1de6ee767c01a838888aeddba253f036a56c0b61c491d7c1c5fd33d02259e22b: Status 404 returned error can't find the container with id 1de6ee767c01a838888aeddba253f036a56c0b61c491d7c1c5fd33d02259e22b
	Oct 26 09:23:11 old-k8s-version-167519 kubelet[1390]: I1026 09:23:11.478614    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nxhdx" podStartSLOduration=2.47857021 podCreationTimestamp="2025-10-26 09:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:11.46923748 +0000 UTC m=+14.741633363" watchObservedRunningTime="2025-10-26 09:23:11.47857021 +0000 UTC m=+14.750966093"
	Oct 26 09:23:17 old-k8s-version-167519 kubelet[1390]: I1026 09:23:17.258938    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-ljrzw" podStartSLOduration=3.903898878 podCreationTimestamp="2025-10-26 09:23:09 +0000 UTC" firstStartedPulling="2025-10-26 09:23:10.366088839 +0000 UTC m=+13.638484714" lastFinishedPulling="2025-10-26 09:23:14.721080952 +0000 UTC m=+17.993476826" observedRunningTime="2025-10-26 09:23:15.473386189 +0000 UTC m=+18.745782064" watchObservedRunningTime="2025-10-26 09:23:17.25889099 +0000 UTC m=+20.531286881"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.434775    1390 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.467888    1390 topology_manager.go:215] "Topology Admit Handler" podUID="1226fa58-8832-4d5c-a8e4-e44cc16a164f" podNamespace="kube-system" podName="coredns-5dd5756b68-h6qmf"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.476376    1390 topology_manager.go:215] "Topology Admit Handler" podUID="6e04a245-ca01-4c7d-9d96-fd35d704d88a" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.584238    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgjf\" (UniqueName: \"kubernetes.io/projected/6e04a245-ca01-4c7d-9d96-fd35d704d88a-kube-api-access-dsgjf\") pod \"storage-provisioner\" (UID: \"6e04a245-ca01-4c7d-9d96-fd35d704d88a\") " pod="kube-system/storage-provisioner"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.584554    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1226fa58-8832-4d5c-a8e4-e44cc16a164f-config-volume\") pod \"coredns-5dd5756b68-h6qmf\" (UID: \"1226fa58-8832-4d5c-a8e4-e44cc16a164f\") " pod="kube-system/coredns-5dd5756b68-h6qmf"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.584603    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6e04a245-ca01-4c7d-9d96-fd35d704d88a-tmp\") pod \"storage-provisioner\" (UID: \"6e04a245-ca01-4c7d-9d96-fd35d704d88a\") " pod="kube-system/storage-provisioner"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: I1026 09:23:25.584642    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7c9x\" (UniqueName: \"kubernetes.io/projected/1226fa58-8832-4d5c-a8e4-e44cc16a164f-kube-api-access-c7c9x\") pod \"coredns-5dd5756b68-h6qmf\" (UID: \"1226fa58-8832-4d5c-a8e4-e44cc16a164f\") " pod="kube-system/coredns-5dd5756b68-h6qmf"
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: W1026 09:23:25.784952    1390 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-e8a52d7130edf4a4b35fa0c5d2dcbc9b26c49c0da95a6c1e94e8b638b598ace4 WatchSource:0}: Error finding container e8a52d7130edf4a4b35fa0c5d2dcbc9b26c49c0da95a6c1e94e8b638b598ace4: Status 404 returned error can't find the container with id e8a52d7130edf4a4b35fa0c5d2dcbc9b26c49c0da95a6c1e94e8b638b598ace4
	Oct 26 09:23:25 old-k8s-version-167519 kubelet[1390]: W1026 09:23:25.802125    1390 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-bc43145d61626a4ad64ab71ca280298a6ced660cfefb09ac2adf6862379523f9 WatchSource:0}: Error finding container bc43145d61626a4ad64ab71ca280298a6ced660cfefb09ac2adf6862379523f9: Status 404 returned error can't find the container with id bc43145d61626a4ad64ab71ca280298a6ced660cfefb09ac2adf6862379523f9
	Oct 26 09:23:26 old-k8s-version-167519 kubelet[1390]: I1026 09:23:26.511554    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h6qmf" podStartSLOduration=16.511510744 podCreationTimestamp="2025-10-26 09:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:26.495470613 +0000 UTC m=+29.767866496" watchObservedRunningTime="2025-10-26 09:23:26.511510744 +0000 UTC m=+29.783906619"
	Oct 26 09:23:28 old-k8s-version-167519 kubelet[1390]: I1026 09:23:28.749247    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.749207476 podCreationTimestamp="2025-10-26 09:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:26.528488102 +0000 UTC m=+29.800883993" watchObservedRunningTime="2025-10-26 09:23:28.749207476 +0000 UTC m=+32.021603350"
	Oct 26 09:23:28 old-k8s-version-167519 kubelet[1390]: I1026 09:23:28.749369    1390 topology_manager.go:215] "Topology Admit Handler" podUID="8840eaba-c0c0-4054-98c5-7062e5c2f5e4" podNamespace="default" podName="busybox"
	Oct 26 09:23:28 old-k8s-version-167519 kubelet[1390]: I1026 09:23:28.802360    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsp7s\" (UniqueName: \"kubernetes.io/projected/8840eaba-c0c0-4054-98c5-7062e5c2f5e4-kube-api-access-lsp7s\") pod \"busybox\" (UID: \"8840eaba-c0c0-4054-98c5-7062e5c2f5e4\") " pod="default/busybox"
	Oct 26 09:23:29 old-k8s-version-167519 kubelet[1390]: W1026 09:23:29.077854    1390 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b WatchSource:0}: Error finding container 874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b: Status 404 returned error can't find the container with id 874855771e3c126123d4a6fa5ba7e932eccec9276275160fbfea42b4ee01807b
	
	
	==> storage-provisioner [bfef32256b1dd04eb23ecd2cc0ec77c488b7592c45eddce580d4d7f13e8ea91f] <==
	I1026 09:23:25.905493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:23:25.918120       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:23:25.918245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 09:23:25.939753       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:23:25.940220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21d96fe7-10ce-4078-9698-96debabaa3e8", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-167519_96436991-c15c-4441-9e05-2920c9fa648d became leader
	I1026 09:23:25.941895       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-167519_96436991-c15c-4441-9e05-2920c9fa648d!
	I1026 09:23:26.042280       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-167519_96436991-c15c-4441-9e05-2920c9fa648d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-167519 -n old-k8s-version-167519
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-167519 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (352.874293ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
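
For context, a minimal sketch of re-running by hand the paused check that failed above. The `sudo runc list -f json` command is copied verbatim from the captured stderr; wrapping it in `minikube ssh` with this test's profile name is an assumption, not something the harness does:

	# Re-run the exact command minikube's paused check shells out to
	# (copied from the stderr above); it exits 1 with the same error
	# when /run/runc does not exist on the node:
	out/minikube-linux-arm64 -p default-k8s-diff-port-289159 ssh -- sudo runc list -f json

	# Confirm the missing state directory the error message points at:
	out/minikube-linux-arm64 -p default-k8s-diff-port-289159 ssh -- ls -d /run/runc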
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-289159 describe deploy/metrics-server -n kube-system: exit status 1 (103.56257ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-289159 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
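
As a hypothetical illustration of the image check the assertion above performs, one could inspect the deployment's container image directly, assuming the metrics-server deployment had been created (here it never was, so the deployment info is empty):

	# Print the first container image of the metrics-server deployment;
	# the test expects it to contain fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-289159 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'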
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-289159
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-289159:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67",
	        "Created": "2025-10-26T09:22:35.695576526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:22:35.76619463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/hostname",
	        "HostsPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/hosts",
	        "LogPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67-json.log",
	        "Name": "/default-k8s-diff-port-289159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-289159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-289159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67",
	                "LowerDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-289159",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-289159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-289159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-289159",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-289159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "826c95e4d3ed0c47de10b7a1a1c0170db9645d3e00d65c6ff928edb2b93e8278",
	            "SandboxKey": "/var/run/docker/netns/826c95e4d3ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-289159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:d3:82:ce:15:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "788f8e4ab8525806628d59d0a963ab3ec20463b77ce93fefea997bd8290d71c3",
	                    "EndpointID": "11ea091b4b558ae093a75ecb095662bfba1ac79b6a0ee259b6e40d2be9723c81",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-289159",
	                        "e75dab2714ba"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-289159 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-289159 logs -n 25: (1.905758489s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-796399 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo containerd config dump                                                                                                                                                                                                  │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo crio config                                                                                                                                                                                                             │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ delete  │ -p cilium-796399                                                                                                                                                                                                                              │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:17 UTC │
	│ start   │ -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:18 UTC │
	│ delete  │ -p force-systemd-env-003748                                                                                                                                                                                                                   │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:18 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	│ delete  │ -p kubernetes-upgrade-275732                                                                                                                                                                                                                  │ kubernetes-upgrade-275732    │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:21 UTC │
	│ start   │ -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ cert-options-094384 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:23:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:23:52.908345  488173 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:23:52.909286  488173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:23:52.909328  488173 out.go:374] Setting ErrFile to fd 2...
	I1026 09:23:52.909350  488173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:23:52.909660  488173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:23:52.910101  488173 out.go:368] Setting JSON to false
	I1026 09:23:52.911232  488173 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11183,"bootTime":1761459450,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:23:52.911338  488173 start.go:141] virtualization:  
	I1026 09:23:52.914238  488173 out.go:179] * [old-k8s-version-167519] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:23:52.918303  488173 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:23:52.918362  488173 notify.go:220] Checking for updates...
	I1026 09:23:52.921399  488173 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:23:52.924537  488173 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:23:52.927465  488173 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:23:52.930335  488173 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:23:52.933278  488173 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:23:52.936753  488173 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:23:52.940220  488173 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 09:23:52.943160  488173 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:23:52.981831  488173 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:23:52.981972  488173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:23:53.040747  488173 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:23:53.031515587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:23:53.040860  488173 docker.go:318] overlay module found
	I1026 09:23:53.044068  488173 out.go:179] * Using the docker driver based on existing profile
	I1026 09:23:53.046983  488173 start.go:305] selected driver: docker
	I1026 09:23:53.047006  488173 start.go:925] validating driver "docker" against &{Name:old-k8s-version-167519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:23:53.047128  488173 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:23:53.047820  488173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:23:53.115888  488173 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:23:53.106842199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:23:53.116253  488173 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:23:53.116289  488173 cni.go:84] Creating CNI manager for ""
	I1026 09:23:53.116354  488173 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:23:53.116398  488173 start.go:349] cluster config:
	{Name:old-k8s-version-167519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:23:53.121346  488173 out.go:179] * Starting "old-k8s-version-167519" primary control-plane node in "old-k8s-version-167519" cluster
	I1026 09:23:53.124266  488173 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:23:53.127169  488173 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:23:53.129853  488173 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 09:23:53.129919  488173 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 09:23:53.129934  488173 cache.go:58] Caching tarball of preloaded images
	I1026 09:23:53.129941  488173 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:23:53.130018  488173 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:23:53.130029  488173 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 09:23:53.130146  488173 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/config.json ...
	I1026 09:23:53.151876  488173 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:23:53.151898  488173 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:23:53.151917  488173 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:23:53.151941  488173 start.go:360] acquireMachinesLock for old-k8s-version-167519: {Name:mk7a366be3fe0b573e9600c222ca24e96d18d7b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:23:53.152008  488173 start.go:364] duration metric: took 45.785µs to acquireMachinesLock for "old-k8s-version-167519"
	I1026 09:23:53.152054  488173 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:23:53.152065  488173 fix.go:54] fixHost starting: 
	I1026 09:23:53.152340  488173 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:23:53.181511  488173 fix.go:112] recreateIfNeeded on old-k8s-version-167519: state=Stopped err=<nil>
	W1026 09:23:53.181541  488173 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 09:23:51.140181  483549 node_ready.go:57] node "default-k8s-diff-port-289159" has "Ready":"False" status (will retry)
	I1026 09:23:52.139394  483549 node_ready.go:49] node "default-k8s-diff-port-289159" is "Ready"
	I1026 09:23:52.139424  483549 node_ready.go:38] duration metric: took 39.502812273s for node "default-k8s-diff-port-289159" to be "Ready" ...
	I1026 09:23:52.139438  483549 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:23:52.139501  483549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:23:52.151167  483549 api_server.go:72] duration metric: took 41.678221144s to wait for apiserver process to appear ...
	I1026 09:23:52.151192  483549 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:23:52.151212  483549 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1026 09:23:52.159265  483549 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
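	The healthz poll logged above can be reproduced by hand against the same endpoint. A minimal sketch, assuming the apiserver is still reachable at 192.168.85.2:8444 and that this run's CA and kubeconfig are still in place (the paths and context name are taken from this job's MINIKUBE_HOME and are assumptions outside this environment):

	    # Same probe minikube performs: expect the body "ok" and HTTP 200.
	    curl --cacert /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt \
	         -sw '\nHTTP %{http_code}\n' https://192.168.85.2:8444/healthz

	    # Equivalent check, letting kubectl resolve server and credentials:
	    kubectl --context default-k8s-diff-port-289159 get --raw /healthz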
	I1026 09:23:52.160397  483549 api_server.go:141] control plane version: v1.34.1
	I1026 09:23:52.160456  483549 api_server.go:131] duration metric: took 9.25625ms to wait for apiserver health ...
	I1026 09:23:52.160480  483549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:23:52.163893  483549 system_pods.go:59] 8 kube-system pods found
	I1026 09:23:52.163930  483549 system_pods.go:61] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:52.163939  483549 system_pods.go:61] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running
	I1026 09:23:52.163946  483549 system_pods.go:61] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:23:52.163952  483549 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running
	I1026 09:23:52.163958  483549 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running
	I1026 09:23:52.163969  483549 system_pods.go:61] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:23:52.163974  483549 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running
	I1026 09:23:52.163993  483549 system_pods.go:61] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:52.164004  483549 system_pods.go:74] duration metric: took 3.504973ms to wait for pod list to return data ...
	I1026 09:23:52.164013  483549 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:23:52.166500  483549 default_sa.go:45] found service account: "default"
	I1026 09:23:52.166527  483549 default_sa.go:55] duration metric: took 2.508061ms for default service account to be created ...
	I1026 09:23:52.166537  483549 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:23:52.169522  483549 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:52.169556  483549 system_pods.go:89] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:52.169564  483549 system_pods.go:89] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running
	I1026 09:23:52.169570  483549 system_pods.go:89] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:23:52.169574  483549 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running
	I1026 09:23:52.169579  483549 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running
	I1026 09:23:52.169584  483549 system_pods.go:89] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:23:52.169588  483549 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running
	I1026 09:23:52.169594  483549 system_pods.go:89] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:52.169614  483549 retry.go:31] will retry after 233.764441ms: missing components: kube-dns
	I1026 09:23:52.408625  483549 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:52.408665  483549 system_pods.go:89] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:52.408673  483549 system_pods.go:89] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running
	I1026 09:23:52.408679  483549 system_pods.go:89] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:23:52.408685  483549 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running
	I1026 09:23:52.408690  483549 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running
	I1026 09:23:52.408694  483549 system_pods.go:89] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:23:52.408699  483549 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running
	I1026 09:23:52.408706  483549 system_pods.go:89] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:52.408725  483549 retry.go:31] will retry after 322.222831ms: missing components: kube-dns
	I1026 09:23:52.735516  483549 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:52.735549  483549 system_pods.go:89] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:52.735556  483549 system_pods.go:89] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running
	I1026 09:23:52.735562  483549 system_pods.go:89] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:23:52.735568  483549 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running
	I1026 09:23:52.735572  483549 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running
	I1026 09:23:52.735578  483549 system_pods.go:89] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:23:52.735582  483549 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running
	I1026 09:23:52.735588  483549 system_pods.go:89] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:52.735608  483549 retry.go:31] will retry after 444.774942ms: missing components: kube-dns
	I1026 09:23:53.194219  483549 system_pods.go:86] 8 kube-system pods found
	I1026 09:23:53.194249  483549 system_pods.go:89] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:23:53.194258  483549 system_pods.go:89] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running
	I1026 09:23:53.194264  483549 system_pods.go:89] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:23:53.194269  483549 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running
	I1026 09:23:53.194273  483549 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running
	I1026 09:23:53.194277  483549 system_pods.go:89] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:23:53.194281  483549 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running
	I1026 09:23:53.194287  483549 system_pods.go:89] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:23:53.194295  483549 system_pods.go:126] duration metric: took 1.027751951s to wait for k8s-apps to be running ...
	I1026 09:23:53.194303  483549 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:23:53.194357  483549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:23:53.222983  483549 system_svc.go:56] duration metric: took 28.670128ms WaitForService to wait for kubelet
	I1026 09:23:53.223010  483549 kubeadm.go:586] duration metric: took 42.750069678s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:23:53.223031  483549 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:23:53.237715  483549 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:23:53.237747  483549 node_conditions.go:123] node cpu capacity is 2
	I1026 09:23:53.237764  483549 node_conditions.go:105] duration metric: took 14.725416ms to run NodePressure ...
	I1026 09:23:53.237777  483549 start.go:241] waiting for startup goroutines ...
	I1026 09:23:53.237785  483549 start.go:246] waiting for cluster config update ...
	I1026 09:23:53.237795  483549 start.go:255] writing updated cluster config ...
	I1026 09:23:53.238087  483549 ssh_runner.go:195] Run: rm -f paused
	I1026 09:23:53.252652  483549 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:23:53.311696  483549 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szwxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.318290  483549 pod_ready.go:94] pod "coredns-66bc5c9577-szwxb" is "Ready"
	I1026 09:23:53.318314  483549 pod_ready.go:86] duration metric: took 6.593192ms for pod "coredns-66bc5c9577-szwxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.372244  483549 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.377738  483549 pod_ready.go:94] pod "etcd-default-k8s-diff-port-289159" is "Ready"
	I1026 09:23:53.377813  483549 pod_ready.go:86] duration metric: took 5.54344ms for pod "etcd-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.380983  483549 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.387255  483549 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-289159" is "Ready"
	I1026 09:23:53.387341  483549 pod_ready.go:86] duration metric: took 6.286661ms for pod "kube-apiserver-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.390549  483549 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.657035  483549 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-289159" is "Ready"
	I1026 09:23:53.657069  483549 pod_ready.go:86] duration metric: took 266.443296ms for pod "kube-controller-manager-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:53.858294  483549 pod_ready.go:83] waiting for pod "kube-proxy-kzrr9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:54.256816  483549 pod_ready.go:94] pod "kube-proxy-kzrr9" is "Ready"
	I1026 09:23:54.256849  483549 pod_ready.go:86] duration metric: took 398.521511ms for pod "kube-proxy-kzrr9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:54.457207  483549 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:54.856364  483549 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-289159" is "Ready"
	I1026 09:23:54.856396  483549 pod_ready.go:86] duration metric: took 399.15928ms for pod "kube-scheduler-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:23:54.856410  483549 pod_ready.go:40] duration metric: took 1.603726875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:23:54.910196  483549 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:23:54.913324  483549 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-289159" cluster and "default" namespace by default
	I1026 09:23:53.184903  488173 out.go:252] * Restarting existing docker container for "old-k8s-version-167519" ...
	I1026 09:23:53.185001  488173 cli_runner.go:164] Run: docker start old-k8s-version-167519
	I1026 09:23:53.488259  488173 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:23:53.510204  488173 kic.go:430] container "old-k8s-version-167519" state is running.
	I1026 09:23:53.510608  488173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-167519
	I1026 09:23:53.532525  488173 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/config.json ...
	I1026 09:23:53.532765  488173 machine.go:93] provisionDockerMachine start ...
	I1026 09:23:53.532833  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:53.554088  488173 main.go:141] libmachine: Using SSH client type: native
	I1026 09:23:53.554414  488173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1026 09:23:53.554424  488173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:23:53.555169  488173 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42310->127.0.0.1:33430: read: connection reset by peer
	I1026 09:23:56.702282  488173 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-167519
	
	I1026 09:23:56.702314  488173 ubuntu.go:182] provisioning hostname "old-k8s-version-167519"
	I1026 09:23:56.702399  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:56.719748  488173 main.go:141] libmachine: Using SSH client type: native
	I1026 09:23:56.720078  488173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1026 09:23:56.720097  488173 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-167519 && echo "old-k8s-version-167519" | sudo tee /etc/hostname
	I1026 09:23:56.881247  488173 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-167519
	
	I1026 09:23:56.881339  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:56.905466  488173 main.go:141] libmachine: Using SSH client type: native
	I1026 09:23:56.905782  488173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1026 09:23:56.905805  488173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-167519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-167519/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-167519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:23:57.059162  488173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:23:57.059188  488173 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:23:57.059219  488173 ubuntu.go:190] setting up certificates
	I1026 09:23:57.059229  488173 provision.go:84] configureAuth start
	I1026 09:23:57.059289  488173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-167519
	I1026 09:23:57.079306  488173 provision.go:143] copyHostCerts
	I1026 09:23:57.079385  488173 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:23:57.079401  488173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:23:57.079478  488173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:23:57.079589  488173 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:23:57.079600  488173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:23:57.079628  488173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:23:57.079721  488173 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:23:57.079732  488173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:23:57.079758  488173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:23:57.079808  488173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-167519 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-167519]
	I1026 09:23:57.494653  488173 provision.go:177] copyRemoteCerts
	I1026 09:23:57.494951  488173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:23:57.495051  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:57.519391  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:23:57.635786  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:23:57.657306  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 09:23:57.675989  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:23:57.693796  488173 provision.go:87] duration metric: took 634.553565ms to configureAuth
	I1026 09:23:57.693825  488173 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:23:57.694014  488173 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:23:57.694122  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:57.711558  488173 main.go:141] libmachine: Using SSH client type: native
	I1026 09:23:57.711885  488173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1026 09:23:57.711906  488173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:23:58.032807  488173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:23:58.032871  488173 machine.go:96] duration metric: took 4.500087431s to provisionDockerMachine
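	The sysconfig drop-in written just above is a plain file on the node and can be inspected after the restart; a quick sketch, assuming a minikube binary on PATH (this job invokes out/minikube-linux-arm64):

	    # Show the drop-in that carries the extra CRI-O flags:
	    minikube -p old-k8s-version-167519 ssh -- cat /etc/sysconfig/crio.minikube

	    # Confirm CRI-O came back up after the restart:
	    minikube -p old-k8s-version-167519 ssh -- sudo systemctl is-active crio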
	I1026 09:23:58.032896  488173 start.go:293] postStartSetup for "old-k8s-version-167519" (driver="docker")
	I1026 09:23:58.032923  488173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:23:58.033019  488173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:23:58.033151  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:58.055244  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:23:58.164523  488173 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:23:58.168077  488173 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:23:58.168110  488173 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:23:58.168122  488173 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:23:58.168179  488173 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:23:58.168269  488173 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:23:58.168382  488173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:23:58.176174  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:23:58.198176  488173 start.go:296] duration metric: took 165.249121ms for postStartSetup
	I1026 09:23:58.198352  488173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:23:58.198442  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:58.220408  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:23:58.324086  488173 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:23:58.328862  488173 fix.go:56] duration metric: took 5.176789827s for fixHost
	I1026 09:23:58.328889  488173 start.go:83] releasing machines lock for "old-k8s-version-167519", held for 5.176868293s
	I1026 09:23:58.328965  488173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-167519
	I1026 09:23:58.347273  488173 ssh_runner.go:195] Run: cat /version.json
	I1026 09:23:58.347325  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:58.347757  488173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:23:58.347807  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:23:58.368105  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:23:58.380356  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:23:58.474324  488173 ssh_runner.go:195] Run: systemctl --version
	I1026 09:23:58.568347  488173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:23:58.607292  488173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:23:58.612328  488173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:23:58.612412  488173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:23:58.620577  488173 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:23:58.620601  488173 start.go:495] detecting cgroup driver to use...
	I1026 09:23:58.620631  488173 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:23:58.620681  488173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:23:58.636753  488173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:23:58.650308  488173 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:23:58.650406  488173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:23:58.666358  488173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:23:58.679958  488173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:23:58.801377  488173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:23:58.924269  488173 docker.go:234] disabling docker service ...
	I1026 09:23:58.924342  488173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:23:58.939921  488173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:23:58.955022  488173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:23:59.090335  488173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:23:59.218201  488173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:23:59.232011  488173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:23:59.246266  488173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 09:23:59.246342  488173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.256104  488173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:23:59.256175  488173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.265281  488173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.274358  488173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.283352  488173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:23:59.291663  488173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.300442  488173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.309192  488173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:23:59.319762  488173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:23:59.327443  488173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:23:59.335280  488173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:23:59.455167  488173 ssh_runner.go:195] Run: sudo systemctl restart crio
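	The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before the restart; assuming every edit applied cleanly, the drop-in should now pin the pause image, the cgroup driver, the conmon cgroup, and the unprivileged-port sysctl, which can be spot-checked from inside the node with:

	    # Keys the preceding sed commands are expected to have set:
	    #   pause_image     = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager  = "cgroupfs"
	    #   conmon_cgroup   = "pod"
	    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	         /etc/crio/crio.conf.d/02-crio.conf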
	I1026 09:23:59.599300  488173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:23:59.599405  488173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:23:59.603131  488173 start.go:563] Will wait 60s for crictl version
	I1026 09:23:59.603195  488173 ssh_runner.go:195] Run: which crictl
	I1026 09:23:59.606665  488173 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:23:59.633090  488173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:23:59.633174  488173 ssh_runner.go:195] Run: crio --version
	I1026 09:23:59.665476  488173 ssh_runner.go:195] Run: crio --version
	I1026 09:23:59.697669  488173 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 09:23:59.700610  488173 cli_runner.go:164] Run: docker network inspect old-k8s-version-167519 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:23:59.722791  488173 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:23:59.726731  488173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
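	The one-liner above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends a fresh one pointing at the network gateway, and copies the temp file back over /etc/hosts. Resolution can be spot-checked from inside the node; a sketch, assuming getent is available in the Debian-based node image used here:

	    # host.minikube.internal should map to the gateway, 192.168.76.1 in this run:
	    minikube -p old-k8s-version-167519 ssh -- getent hosts host.minikube.internal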
	I1026 09:23:59.737392  488173 kubeadm.go:883] updating cluster {Name:old-k8s-version-167519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:23:59.737516  488173 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 09:23:59.737575  488173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:23:59.772099  488173 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:23:59.772124  488173 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:23:59.772181  488173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:23:59.808499  488173 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:23:59.808525  488173 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:23:59.808535  488173 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1026 09:23:59.808637  488173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-167519 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
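	In the drop-in above, the empty ExecStart= line is systemd's reset idiom: it clears the ExecStart inherited from kubelet.service so that the line that follows fully replaces it rather than appending a second command. A sketch for inspecting the merged unit on the node (systemctl cat is standard systemd; the drop-in path appears later in this log):

		# Print kubelet.service together with every drop-in that overrides it,
		# including /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
		systemctl cat kubelet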
	I1026 09:23:59.808719  488173 ssh_runner.go:195] Run: crio config
	I1026 09:23:59.869813  488173 cni.go:84] Creating CNI manager for ""
	I1026 09:23:59.869885  488173 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:23:59.869925  488173 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:23:59.869991  488173 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-167519 NodeName:old-k8s-version-167519 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:23:59.870177  488173 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-167519"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
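	A hedged sketch for sanity-checking a generated config like the one above with the same kubeadm release that will consume it — the kubeadm path is assumed to sit next to the kubelet/kubectl binaries shown in this log, and the config validate subcommand is assumed available in the v1.28 binary:

		# Check the kubeadm and component configuration documents against
		# the kubeadm API before the cluster is (re)started.
		sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new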
	
	I1026 09:23:59.870301  488173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 09:23:59.878138  488173 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:23:59.878282  488173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:23:59.886021  488173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 09:23:59.902120  488173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:23:59.915523  488173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1026 09:23:59.929536  488173 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:23:59.933327  488173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:23:59.943922  488173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:00.110665  488173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:24:00.156766  488173 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519 for IP: 192.168.76.2
	I1026 09:24:00.156790  488173 certs.go:195] generating shared ca certs ...
	I1026 09:24:00.156807  488173 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:00.156970  488173 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:24:00.157013  488173 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:24:00.157021  488173 certs.go:257] generating profile certs ...
	I1026 09:24:00.157117  488173 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.key
	I1026 09:24:00.157185  488173 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key.73d1f48f
	I1026 09:24:00.157229  488173 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.key
	I1026 09:24:00.157356  488173 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:24:00.157387  488173 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:24:00.157402  488173 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:24:00.157430  488173 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:24:00.157502  488173 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:24:00.157526  488173 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:24:00.157573  488173 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:24:00.158226  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:24:00.292984  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:24:00.372963  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:24:00.424990  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:24:00.460760  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 09:24:00.487617  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:24:00.516792  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:24:00.542190  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:24:00.576213  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:24:00.597888  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:24:00.618260  488173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:24:00.638327  488173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:24:00.656644  488173 ssh_runner.go:195] Run: openssl version
	I1026 09:24:00.662925  488173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:24:00.672421  488173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:24:00.676503  488173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:24:00.676610  488173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:24:00.717994  488173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:24:00.725867  488173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:24:00.734291  488173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:24:00.738192  488173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:24:00.738316  488173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:24:00.779446  488173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:24:00.787697  488173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:24:00.796331  488173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:00.800075  488173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:00.800152  488173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:00.843156  488173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
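	The three ls / openssl x509 -hash / ln sequences above implement OpenSSL's subject-hash lookup convention: a CA becomes resolvable system-wide once it is symlinked as <subject-hash>.0 under /etc/ssl/certs. A minimal sketch of one round trip (cert path and the resulting hash b5213941 both taken from the log):

		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # where OpenSSL looks up trust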
	I1026 09:24:00.851387  488173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:24:00.855712  488173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:24:00.897314  488173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:24:00.941556  488173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:24:00.988256  488173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:24:01.051848  488173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:24:01.127186  488173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
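	Each of the six openssl calls above passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube presumably uses that exit code to decide whether a cert needs regeneration before restart. A sketch of the same test in isolation (path from the log):

		if ! sudo openssl x509 -noout -checkend 86400 \
		    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
		  echo "cert expires within 24h; would trigger regeneration"
		fi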
	I1026 09:24:01.224018  488173 kubeadm.go:400] StartCluster: {Name:old-k8s-version-167519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-167519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:01.224184  488173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:24:01.224275  488173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:24:01.273027  488173 cri.go:89] found id: "71eca1ab06e9f401fcbb26b13ea7782bc9fb8408408ae068731fb754c9192995"
	I1026 09:24:01.273106  488173 cri.go:89] found id: "d6f0b67fc7a92431594127a805c5e2a9df01b5bdd70421309c258bb58ff6bfe6"
	I1026 09:24:01.273126  488173 cri.go:89] found id: "ab3db05a45deb6bea25b1d1de0e1072710d4748379c32ed072990766bd661dd3"
	I1026 09:24:01.273150  488173 cri.go:89] found id: "ebcf8b7c4e3060a1abd28a4f831dbec6225a03e23149c701a93b6a01c65593bc"
	I1026 09:24:01.273168  488173 cri.go:89] found id: ""
	I1026 09:24:01.273242  488173 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:24:01.290657  488173 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:01Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:24:01.290809  488173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:24:01.308049  488173 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:24:01.308123  488173 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:24:01.308188  488173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:24:01.322124  488173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:24:01.322781  488173 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-167519" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:01.323088  488173 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-167519" cluster setting kubeconfig missing "old-k8s-version-167519" context setting]
	I1026 09:24:01.323620  488173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:01.325297  488173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:24:01.341029  488173 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 09:24:01.341106  488173 kubeadm.go:601] duration metric: took 32.963238ms to restartPrimaryControlPlane
	I1026 09:24:01.341131  488173 kubeadm.go:402] duration metric: took 117.121028ms to StartCluster
	I1026 09:24:01.341161  488173 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:01.341243  488173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:01.342292  488173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:01.342569  488173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:24:01.342996  488173 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:24:01.343157  488173 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:24:01.343268  488173 addons.go:69] Setting dashboard=true in profile "old-k8s-version-167519"
	I1026 09:24:01.343290  488173 addons.go:238] Setting addon dashboard=true in "old-k8s-version-167519"
	W1026 09:24:01.343297  488173 addons.go:247] addon dashboard should already be in state true
	I1026 09:24:01.343321  488173 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:24:01.343802  488173 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:24:01.344012  488173 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-167519"
	I1026 09:24:01.344055  488173 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-167519"
	W1026 09:24:01.344076  488173 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:24:01.344112  488173 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:24:01.344576  488173 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:24:01.347237  488173 out.go:179] * Verifying Kubernetes components...
	I1026 09:24:01.347343  488173 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-167519"
	I1026 09:24:01.349628  488173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-167519"
	I1026 09:24:01.350002  488173 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:24:01.351768  488173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:01.387732  488173 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:24:01.393247  488173 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:24:01.393273  488173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:24:01.393345  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:24:01.422766  488173 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:24:01.423922  488173 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-167519"
	W1026 09:24:01.423940  488173 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:24:01.423963  488173 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:24:01.424381  488173 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:24:01.430783  488173 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:24:01.434844  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:24:01.434876  488173 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:24:01.434969  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:24:01.449660  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:24:01.470999  488173 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:24:01.471021  488173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:24:01.471087  488173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:24:01.492727  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:24:01.518496  488173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:24:01.657140  488173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:24:01.688327  488173 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-167519" to be "Ready" ...
	I1026 09:24:01.708271  488173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:24:01.755892  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:24:01.755913  488173 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:24:01.790754  488173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:24:01.820270  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:24:01.820335  488173 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:24:01.930006  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:24:01.930093  488173 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:24:02.010910  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:24:02.010987  488173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:24:02.049383  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:24:02.049457  488173 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:24:02.087039  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:24:02.087115  488173 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:24:02.104526  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:24:02.104601  488173 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:24:02.129410  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:24:02.129489  488173 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:24:02.149517  488173 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:24:02.149605  488173 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:24:02.179815  488173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
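	Once those ten manifests are applied, a hedged verification sketch — the kubernetes-dashboard namespace name is assumed from the standard dashboard manifests rather than shown in this log:

		# Check that the dashboard pods created by dashboard-dp.yaml come up.
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.28.0/kubectl -n kubernetes-dashboard get pods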
	
	
	==> CRI-O <==
	Oct 26 09:23:52 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:52.279257559Z" level=info msg="Created container 4d7a9c233998fa47112e386934e5d39c370de9b25559cd841672499fcea2c189: kube-system/coredns-66bc5c9577-szwxb/coredns" id=89a78cd8-8bd5-4499-9d8d-75c0788f6d3e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:23:52 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:52.281547894Z" level=info msg="Starting container: 4d7a9c233998fa47112e386934e5d39c370de9b25559cd841672499fcea2c189" id=7657fd47-74a6-4dbe-8e8e-66e3406e098c name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:23:52 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:52.286533041Z" level=info msg="Started container" PID=1741 containerID=4d7a9c233998fa47112e386934e5d39c370de9b25559cd841672499fcea2c189 description=kube-system/coredns-66bc5c9577-szwxb/coredns id=7657fd47-74a6-4dbe-8e8e-66e3406e098c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1798fd41fefe7468759f08caa9303f3e3e6ae6eb6a4e60492728dc6a85757d3
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.480324799Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c41d979b-74b2-4d11-be9d-77369b7349c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.480398384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.48549609Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7 UID:10b1f4ca-a577-4294-8ca9-e260f0eb3247 NetNS:/var/run/netns/204c054b-8b8a-4ba2-86ea-797012639d02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b2e858}] Aliases:map[]}"
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.485642545Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.496176506Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7 UID:10b1f4ca-a577-4294-8ca9-e260f0eb3247 NetNS:/var/run/netns/204c054b-8b8a-4ba2-86ea-797012639d02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b2e858}] Aliases:map[]}"
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.496327088Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.49957897Z" level=info msg="Ran pod sandbox b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7 with infra container: default/busybox/POD" id=c41d979b-74b2-4d11-be9d-77369b7349c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.500965146Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4d770be8-b3c7-4ba3-9fac-f0c7946b3e87 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.501085681Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4d770be8-b3c7-4ba3-9fac-f0c7946b3e87 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.501130769Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4d770be8-b3c7-4ba3-9fac-f0c7946b3e87 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.502920814Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=54d6248a-eba3-43d4-a874-dc86cdad8e1e name=/runtime.v1.ImageService/PullImage
	Oct 26 09:23:55 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:55.504326567Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.57110438Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=54d6248a-eba3-43d4-a874-dc86cdad8e1e name=/runtime.v1.ImageService/PullImage
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.571824488Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9814d4d8-2e21-4c95-9c27-9145300d220a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.575778615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cad32e9c-b528-47c3-97b7-b015f3509e86 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.583110434Z" level=info msg="Creating container: default/busybox/busybox" id=299e874f-b692-42b3-b25b-29183e77c8d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.583380123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.591182382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.591901275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.608011841Z" level=info msg="Created container 9f431dc3565bceee571ba1f79915a98cbad9f7ec314206b0d286bb7cfedf14a2: default/busybox/busybox" id=299e874f-b692-42b3-b25b-29183e77c8d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.609004001Z" level=info msg="Starting container: 9f431dc3565bceee571ba1f79915a98cbad9f7ec314206b0d286bb7cfedf14a2" id=d4ca5099-84bc-44aa-9a23-85aa363268f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:23:57 default-k8s-diff-port-289159 crio[841]: time="2025-10-26T09:23:57.610672929Z" level=info msg="Started container" PID=1795 containerID=9f431dc3565bceee571ba1f79915a98cbad9f7ec314206b0d286bb7cfedf14a2 description=default/busybox/busybox id=d4ca5099-84bc-44aa-9a23-85aa363268f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9f431dc3565bc       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   b54d86844f423       busybox                                                default
	4d7a9c233998f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   d1798fd41fefe       coredns-66bc5c9577-szwxb                               kube-system
	f4c2d73cf39bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   ce35c4060dc3f       storage-provisioner                                    kube-system
	3b447c4ff41c5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   f734f456562de       kindnet-7kfgn                                          kube-system
	e107d8b0a7f71       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   3727ee8e424fd       kube-proxy-kzrr9                                       kube-system
	8c79d92c56e55       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   16feb81646221       kube-scheduler-default-k8s-diff-port-289159            kube-system
	fe8923e543965       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   50bd72116805a       kube-controller-manager-default-k8s-diff-port-289159   kube-system
	78068913be3f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   2f68baecfe260       etcd-default-k8s-diff-port-289159                      kube-system
	7ed21d964d815       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   03f2485145730       kube-apiserver-default-k8s-diff-port-289159            kube-system
	
	
	==> coredns [4d7a9c233998fa47112e386934e5d39c370de9b25559cd841672499fcea2c189] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53245 - 12080 "HINFO IN 305493108585050338.890857654780394899. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.016602921s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-289159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-289159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=default-k8s-diff-port-289159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_23_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:23:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-289159
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:24:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:24:06 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:24:06 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:24:06 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:24:06 +0000   Sun, 26 Oct 2025 09:23:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-289159
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1e8e5c9f-87b4-4325-9486-aebc60fc37f2
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-szwxb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-289159                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-7kfgn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-289159             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-289159    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-kzrr9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-289159             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-289159 event: Registered Node default-k8s-diff-port-289159 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-289159 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 08:58] overlayfs: idmapped layers are currently not supported
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [78068913be3f679a54c09e8ab39263553a9939295599cc8c04f2ef410566ad1b] <==
	{"level":"warn","ts":"2025-10-26T09:22:59.883386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:22:59.980253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:22:59.987248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.032874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.057344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.076034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.095493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.144465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.157128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.186833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.230333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.260407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.304574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.359561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.413608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.458660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.494296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.531063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.547513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.574324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.604710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.642169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.671881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.699911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:23:00.801070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45620","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:24:06 up  3:06,  0 user,  load average: 2.83, 2.80, 2.63
	Linux default-k8s-diff-port-289159 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3b447c4ff41c5594812d017b39192eb7c09b9a40e2036ccc4516758e145ec805] <==
	I1026 09:23:11.314303       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:23:11.400327       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:23:11.400520       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:23:11.400562       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:23:11.400601       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:23:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:23:11.542477       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:23:11.600833       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:23:11.600858       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:23:11.600999       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:23:41.542682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:23:41.601329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:23:41.601336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:23:41.601391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1026 09:23:42.901177       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:23:42.901283       1 metrics.go:72] Registering metrics
	I1026 09:23:42.901355       1 controller.go:711] "Syncing nftables rules"
	I1026 09:23:51.546128       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:23:51.546189       1 main.go:301] handling current node
	I1026 09:24:01.542937       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:24:01.543052       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ed21d964d815fb09e515b74883a368d29fd439674517079192b53d53f931af6] <==
	I1026 09:23:02.064826       1 policy_source.go:240] refreshing policies
	I1026 09:23:02.084785       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:23:02.106234       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 09:23:02.107613       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:23:02.119215       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:23:02.121042       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:23:02.269205       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:23:02.657410       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 09:23:02.669435       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 09:23:02.669464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:23:03.567568       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:23:03.632347       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:23:03.704786       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 09:23:03.712959       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 09:23:03.714051       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:23:03.722123       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:23:03.823010       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:23:04.856021       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:23:04.895551       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 09:23:04.930445       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 09:23:09.541110       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 09:23:09.923792       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:23:10.036196       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:23:10.051540       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1026 09:24:04.367375       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:60852: use of closed network connection
	
	
	==> kube-controller-manager [fe8923e543965d768d7abfb735173fd5c3a976e48ba98062e2ab8672d82f7b6e] <==
	I1026 09:23:08.853634       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:23:08.854201       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 09:23:08.854539       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-289159" podCIDRs=["10.244.0.0/24"]
	I1026 09:23:08.866302       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 09:23:08.867451       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:23:08.868769       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 09:23:08.869165       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 09:23:08.869313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:23:08.869355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:23:08.869384       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:23:08.869493       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:23:08.869723       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:23:08.875293       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:23:08.877869       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:23:08.877987       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:23:08.878197       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:23:08.878324       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:23:08.878486       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:23:08.878708       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 09:23:08.879021       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:23:08.879097       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:23:08.890424       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:23:08.910416       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:23:08.910443       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:23:53.827688       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e107d8b0a7f71f8afa46426804ce66dd0d256a80ced1eae309fd8fa4aac8c3cc] <==
	I1026 09:23:11.142684       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:23:11.352354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:23:11.452820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:23:11.452858       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:23:11.452931       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:23:11.513527       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:23:11.513582       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:23:11.525866       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:23:11.545578       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:23:11.545619       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:23:11.546914       1 config.go:200] "Starting service config controller"
	I1026 09:23:11.546928       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:23:11.546943       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:23:11.546947       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:23:11.546956       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:23:11.546959       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:23:11.547591       1 config.go:309] "Starting node config controller"
	I1026 09:23:11.547600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:23:11.547606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:23:11.651163       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:23:11.651288       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:23:11.651305       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8c79d92c56e554aa76fa396b9a0c902990544f16475a73e8869ab866899de2b7] <==
	E1026 09:23:02.095704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:23:02.095784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:23:02.095825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:23:02.095875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:23:02.095916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:23:02.095958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:23:02.095996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:23:02.096046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:23:02.107185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:23:02.107334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:23:02.107425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:23:02.107506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:23:02.109027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:23:02.109149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:23:02.109304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:23:02.109470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:23:02.109574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:23:02.109619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:23:02.109695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 09:23:03.115234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:23:03.143007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:23:03.146286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:23:03.179419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:23:03.185014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1026 09:23:03.784428       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.715763    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5264ae13-85bc-421f-944d-439d3eb74d24-cni-cfg\") pod \"kindnet-7kfgn\" (UID: \"5264ae13-85bc-421f-944d-439d3eb74d24\") " pod="kube-system/kindnet-7kfgn"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.715785    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5264ae13-85bc-421f-944d-439d3eb74d24-lib-modules\") pod \"kindnet-7kfgn\" (UID: \"5264ae13-85bc-421f-944d-439d3eb74d24\") " pod="kube-system/kindnet-7kfgn"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.715806    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqp2\" (UniqueName: \"kubernetes.io/projected/5264ae13-85bc-421f-944d-439d3eb74d24-kube-api-access-xkqp2\") pod \"kindnet-7kfgn\" (UID: \"5264ae13-85bc-421f-944d-439d3eb74d24\") " pod="kube-system/kindnet-7kfgn"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: E1026 09:23:09.727920    1303 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-289159\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-289159' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: E1026 09:23:09.728002    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7kfgn\" is forbidden: User \"system:node:default-k8s-diff-port-289159\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-289159' and this object" podUID="5264ae13-85bc-421f-944d-439d3eb74d24" pod="kube-system/kindnet-7kfgn"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.816907    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c20778a-d858-442a-bf2f-03c3e155dcd9-lib-modules\") pod \"kube-proxy-kzrr9\" (UID: \"8c20778a-d858-442a-bf2f-03c3e155dcd9\") " pod="kube-system/kube-proxy-kzrr9"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.817119    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c20778a-d858-442a-bf2f-03c3e155dcd9-xtables-lock\") pod \"kube-proxy-kzrr9\" (UID: \"8c20778a-d858-442a-bf2f-03c3e155dcd9\") " pod="kube-system/kube-proxy-kzrr9"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.817201    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfgh\" (UniqueName: \"kubernetes.io/projected/8c20778a-d858-442a-bf2f-03c3e155dcd9-kube-api-access-2zfgh\") pod \"kube-proxy-kzrr9\" (UID: \"8c20778a-d858-442a-bf2f-03c3e155dcd9\") " pod="kube-system/kube-proxy-kzrr9"
	Oct 26 09:23:09 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:09.817298    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c20778a-d858-442a-bf2f-03c3e155dcd9-kube-proxy\") pod \"kube-proxy-kzrr9\" (UID: \"8c20778a-d858-442a-bf2f-03c3e155dcd9\") " pod="kube-system/kube-proxy-kzrr9"
	Oct 26 09:23:10 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:10.579189    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 09:23:10 default-k8s-diff-port-289159 kubelet[1303]: W1026 09:23:10.701223    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-3727ee8e424fd49dda659eeebc8cc4ab0e47a6be2c17aa0cf3c5e9dd885f93fe WatchSource:0}: Error finding container 3727ee8e424fd49dda659eeebc8cc4ab0e47a6be2c17aa0cf3c5e9dd885f93fe: Status 404 returned error can't find the container with id 3727ee8e424fd49dda659eeebc8cc4ab0e47a6be2c17aa0cf3c5e9dd885f93fe
	Oct 26 09:23:10 default-k8s-diff-port-289159 kubelet[1303]: W1026 09:23:10.914843    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-f734f456562de2adb80d497fe0bb69007fbf01c05c92be503bc65834efe51f05 WatchSource:0}: Error finding container f734f456562de2adb80d497fe0bb69007fbf01c05c92be503bc65834efe51f05: Status 404 returned error can't find the container with id f734f456562de2adb80d497fe0bb69007fbf01c05c92be503bc65834efe51f05
	Oct 26 09:23:11 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:11.078283    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kzrr9" podStartSLOduration=2.078251709 podStartE2EDuration="2.078251709s" podCreationTimestamp="2025-10-26 09:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:11.078215065 +0000 UTC m=+6.383206713" watchObservedRunningTime="2025-10-26 09:23:11.078251709 +0000 UTC m=+6.383243341"
	Oct 26 09:23:13 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:13.059886    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7kfgn" podStartSLOduration=4.059854619 podStartE2EDuration="4.059854619s" podCreationTimestamp="2025-10-26 09:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:12.1055203 +0000 UTC m=+7.410511940" watchObservedRunningTime="2025-10-26 09:23:13.059854619 +0000 UTC m=+8.364846251"
	Oct 26 09:23:51 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:51.831446    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 09:23:51 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:51.961365    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed38531-1f76-46dd-a820-dbd4bfafbfb1-config-volume\") pod \"coredns-66bc5c9577-szwxb\" (UID: \"1ed38531-1f76-46dd-a820-dbd4bfafbfb1\") " pod="kube-system/coredns-66bc5c9577-szwxb"
	Oct 26 09:23:51 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:51.961419    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl42v\" (UniqueName: \"kubernetes.io/projected/1ed38531-1f76-46dd-a820-dbd4bfafbfb1-kube-api-access-sl42v\") pod \"coredns-66bc5c9577-szwxb\" (UID: \"1ed38531-1f76-46dd-a820-dbd4bfafbfb1\") " pod="kube-system/coredns-66bc5c9577-szwxb"
	Oct 26 09:23:51 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:51.961447    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/976e1cd6-3736-49e5-a1da-1d28250279ad-tmp\") pod \"storage-provisioner\" (UID: \"976e1cd6-3736-49e5-a1da-1d28250279ad\") " pod="kube-system/storage-provisioner"
	Oct 26 09:23:51 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:51.961469    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjg2\" (UniqueName: \"kubernetes.io/projected/976e1cd6-3736-49e5-a1da-1d28250279ad-kube-api-access-rqjg2\") pod \"storage-provisioner\" (UID: \"976e1cd6-3736-49e5-a1da-1d28250279ad\") " pod="kube-system/storage-provisioner"
	Oct 26 09:23:52 default-k8s-diff-port-289159 kubelet[1303]: W1026 09:23:52.184555    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-ce35c4060dc3fae18b04100d525936ea524f166b2f90ad7ebc81ad873708b8a6 WatchSource:0}: Error finding container ce35c4060dc3fae18b04100d525936ea524f166b2f90ad7ebc81ad873708b8a6: Status 404 returned error can't find the container with id ce35c4060dc3fae18b04100d525936ea524f166b2f90ad7ebc81ad873708b8a6
	Oct 26 09:23:52 default-k8s-diff-port-289159 kubelet[1303]: W1026 09:23:52.218697    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-d1798fd41fefe7468759f08caa9303f3e3e6ae6eb6a4e60492728dc6a85757d3 WatchSource:0}: Error finding container d1798fd41fefe7468759f08caa9303f3e3e6ae6eb6a4e60492728dc6a85757d3: Status 404 returned error can't find the container with id d1798fd41fefe7468759f08caa9303f3e3e6ae6eb6a4e60492728dc6a85757d3
	Oct 26 09:23:53 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:53.241309    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-szwxb" podStartSLOduration=43.241291153 podStartE2EDuration="43.241291153s" podCreationTimestamp="2025-10-26 09:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:53.190589423 +0000 UTC m=+48.495581055" watchObservedRunningTime="2025-10-26 09:23:53.241291153 +0000 UTC m=+48.546282785"
	Oct 26 09:23:53 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:53.304695    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.304675933 podStartE2EDuration="41.304675933s" podCreationTimestamp="2025-10-26 09:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:23:53.30378848 +0000 UTC m=+48.608780104" watchObservedRunningTime="2025-10-26 09:23:53.304675933 +0000 UTC m=+48.609667557"
	Oct 26 09:23:55 default-k8s-diff-port-289159 kubelet[1303]: I1026 09:23:55.289744    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnptp\" (UniqueName: \"kubernetes.io/projected/10b1f4ca-a577-4294-8ca9-e260f0eb3247-kube-api-access-cnptp\") pod \"busybox\" (UID: \"10b1f4ca-a577-4294-8ca9-e260f0eb3247\") " pod="default/busybox"
	Oct 26 09:23:55 default-k8s-diff-port-289159 kubelet[1303]: W1026 09:23:55.499555    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7 WatchSource:0}: Error finding container b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7: Status 404 returned error can't find the container with id b54d86844f42306cd2618e36cc9ce7a55cb86a6684fd723a5a098a1f1eba1be7
	
	
	==> storage-provisioner [f4c2d73cf39bbdfec6d2fd8dfbb4f22a700b2c4a040c7edb129eed6c9d09c699] <==
	I1026 09:23:52.249547       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:23:52.268343       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:23:52.268391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:23:52.272438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:52.282542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:23:52.282703       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:23:52.289129       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-289159_32eba94e-1ccf-43b5-85f2-43d3ec156bfc!
	I1026 09:23:52.290903       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"491e1afa-6b15-4fa8-8df8-cf9dae75b323", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-289159_32eba94e-1ccf-43b5-85f2-43d3ec156bfc became leader
	W1026 09:23:52.291112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:52.325570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:23:52.390294       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-289159_32eba94e-1ccf-43b5-85f2-43d3ec156bfc!
	W1026 09:23:54.328768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:54.335107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:56.338884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:56.343717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:58.347740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:23:58.353646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:00.378200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:00.389538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:02.393205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:02.401108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:04.414965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:04.431430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:06.434601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:24:06.442088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
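The storage-provisioner log that closes the dump above takes its leader lock through v1 Endpoints (leaderelection.go:243 acquiring kube-system/k8s.io-minikube-hostpath), and every renewal of that lock is what emits the repeating "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings. client-go's current leader-election API takes a coordination.k8s.io/v1 Lease lock instead; below is a minimal sketch of that variant reusing the same lock name and namespace (the identity string and the timings are illustrative, not the provisioner's actual values):

	package main
	
	import (
		"context"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()
	
		// Lease-based lock: same election semantics as the Endpoints lock
		// in the log, but served by coordination.k8s.io/v1, so the API
		// server raises no deprecation warning on each renewal.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() {
					// another replica holds the lock; stop provisioning
				},
			},
		})
	}

Only the lock type changes relative to what the log shows; the callbacks and renewal loop are the same, so the warnings disappear without touching the provisioning logic.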
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.61s)
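One detail in the dump above is easy to misread: the kindnet reflector errors ("Failed to watch ... dial tcp 10.96.0.1:443: i/o timeout" at 09:23:41) are transient, not the cause of this failure. A client-go reflector retries failed LISTs with backoff, and the informer only reports ready once a list/watch succeeds, which is why "Caches are synced" follows at 09:23:42; the scheduler's "forbidden" watch errors at 09:23:02 resolve the same way once the RBAC role bindings land a second later. A minimal sketch of that wait-for-sync pattern, assuming in-cluster credentials (the Pod informer and resync period here are illustrative):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		// In-cluster config dials the same https://10.96.0.1:443 endpoint
		// the reflector errors in the log were timing out against.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
		pods := factory.Core().V1().Pods().Informer()
	
		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())
	
		// Dial timeouts surface as "Failed to watch" log lines, but the
		// reflector keeps retrying with backoff; WaitForCacheSync only
		// returns true after an initial LIST has succeeded.
		if !cache.WaitForCacheSync(ctx.Done(), pods.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("caches are synced")
	}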

x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-167519 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-167519 --alsologtostderr -v=1: exit status 80 (1.857740087s)

-- stdout --
	* Pausing node old-k8s-version-167519 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 09:24:51.751861  493244 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:24:51.752088  493244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:24:51.752143  493244 out.go:374] Setting ErrFile to fd 2...
	I1026 09:24:51.752411  493244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:24:51.752795  493244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:24:51.753149  493244 out.go:368] Setting JSON to false
	I1026 09:24:51.753167  493244 mustload.go:65] Loading cluster: old-k8s-version-167519
	I1026 09:24:51.753883  493244 config.go:182] Loaded profile config "old-k8s-version-167519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 09:24:51.754585  493244 cli_runner.go:164] Run: docker container inspect old-k8s-version-167519 --format={{.State.Status}}
	I1026 09:24:51.771992  493244 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:24:51.772447  493244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:24:51.837811  493244 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:24:51.82737285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:24:51.838492  493244 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-167519 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 09:24:51.841916  493244 out.go:179] * Pausing node old-k8s-version-167519 ... 
	I1026 09:24:51.845652  493244 host.go:66] Checking if "old-k8s-version-167519" exists ...
	I1026 09:24:51.846023  493244 ssh_runner.go:195] Run: systemctl --version
	I1026 09:24:51.846073  493244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-167519
	I1026 09:24:51.865106  493244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/old-k8s-version-167519/id_rsa Username:docker}
	I1026 09:24:51.969848  493244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:24:51.986693  493244 pause.go:52] kubelet running: true
	I1026 09:24:51.986840  493244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:24:52.251425  493244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:24:52.251525  493244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:24:52.320805  493244 cri.go:89] found id: "f594970607af9f64c654df6707c1df4091dfe2988957faf28b530298bae0041c"
	I1026 09:24:52.320828  493244 cri.go:89] found id: "cef106e961046296a2cb95911ff65cc35c4668e21eee6d64266403c4b0250c33"
	I1026 09:24:52.320834  493244 cri.go:89] found id: "72253558ec19ee7592abca9835453e0f5cc9ab93df04418f1022780e0b3e9acb"
	I1026 09:24:52.320838  493244 cri.go:89] found id: "25a2d3d5963571b5d87758e7d01d3e8fbafe81732722b0e6ad290d688e909afa"
	I1026 09:24:52.320842  493244 cri.go:89] found id: "661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe"
	I1026 09:24:52.320846  493244 cri.go:89] found id: "71eca1ab06e9f401fcbb26b13ea7782bc9fb8408408ae068731fb754c9192995"
	I1026 09:24:52.320849  493244 cri.go:89] found id: "d6f0b67fc7a92431594127a805c5e2a9df01b5bdd70421309c258bb58ff6bfe6"
	I1026 09:24:52.320852  493244 cri.go:89] found id: "ab3db05a45deb6bea25b1d1de0e1072710d4748379c32ed072990766bd661dd3"
	I1026 09:24:52.320855  493244 cri.go:89] found id: "ebcf8b7c4e3060a1abd28a4f831dbec6225a03e23149c701a93b6a01c65593bc"
	I1026 09:24:52.320885  493244 cri.go:89] found id: "092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	I1026 09:24:52.320894  493244 cri.go:89] found id: "1d8d4803015824aa94be1a8ee92ed81a5ee87510fed7adf0d20b310895cc7673"
	I1026 09:24:52.320898  493244 cri.go:89] found id: ""
	I1026 09:24:52.320971  493244 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:24:52.333917  493244 retry.go:31] will retry after 162.119789ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:24:52.496285  493244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:24:52.510314  493244 pause.go:52] kubelet running: false
	I1026 09:24:52.510379  493244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:24:52.686843  493244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:24:52.686976  493244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:24:52.764108  493244 cri.go:89] found id: "f594970607af9f64c654df6707c1df4091dfe2988957faf28b530298bae0041c"
	I1026 09:24:52.764174  493244 cri.go:89] found id: "cef106e961046296a2cb95911ff65cc35c4668e21eee6d64266403c4b0250c33"
	I1026 09:24:52.764193  493244 cri.go:89] found id: "72253558ec19ee7592abca9835453e0f5cc9ab93df04418f1022780e0b3e9acb"
	I1026 09:24:52.764214  493244 cri.go:89] found id: "25a2d3d5963571b5d87758e7d01d3e8fbafe81732722b0e6ad290d688e909afa"
	I1026 09:24:52.764247  493244 cri.go:89] found id: "661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe"
	I1026 09:24:52.764259  493244 cri.go:89] found id: "71eca1ab06e9f401fcbb26b13ea7782bc9fb8408408ae068731fb754c9192995"
	I1026 09:24:52.764264  493244 cri.go:89] found id: "d6f0b67fc7a92431594127a805c5e2a9df01b5bdd70421309c258bb58ff6bfe6"
	I1026 09:24:52.764267  493244 cri.go:89] found id: "ab3db05a45deb6bea25b1d1de0e1072710d4748379c32ed072990766bd661dd3"
	I1026 09:24:52.764270  493244 cri.go:89] found id: "ebcf8b7c4e3060a1abd28a4f831dbec6225a03e23149c701a93b6a01c65593bc"
	I1026 09:24:52.764276  493244 cri.go:89] found id: "092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	I1026 09:24:52.764280  493244 cri.go:89] found id: "1d8d4803015824aa94be1a8ee92ed81a5ee87510fed7adf0d20b310895cc7673"
	I1026 09:24:52.764283  493244 cri.go:89] found id: ""
	I1026 09:24:52.764345  493244 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:24:52.776219  493244 retry.go:31] will retry after 473.138356ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:24:53.249846  493244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:24:53.263678  493244 pause.go:52] kubelet running: false
	I1026 09:24:53.263759  493244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:24:53.444721  493244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:24:53.444874  493244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:24:53.523816  493244 cri.go:89] found id: "f594970607af9f64c654df6707c1df4091dfe2988957faf28b530298bae0041c"
	I1026 09:24:53.523835  493244 cri.go:89] found id: "cef106e961046296a2cb95911ff65cc35c4668e21eee6d64266403c4b0250c33"
	I1026 09:24:53.523839  493244 cri.go:89] found id: "72253558ec19ee7592abca9835453e0f5cc9ab93df04418f1022780e0b3e9acb"
	I1026 09:24:53.523843  493244 cri.go:89] found id: "25a2d3d5963571b5d87758e7d01d3e8fbafe81732722b0e6ad290d688e909afa"
	I1026 09:24:53.523846  493244 cri.go:89] found id: "661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe"
	I1026 09:24:53.523850  493244 cri.go:89] found id: "71eca1ab06e9f401fcbb26b13ea7782bc9fb8408408ae068731fb754c9192995"
	I1026 09:24:53.523853  493244 cri.go:89] found id: "d6f0b67fc7a92431594127a805c5e2a9df01b5bdd70421309c258bb58ff6bfe6"
	I1026 09:24:53.523856  493244 cri.go:89] found id: "ab3db05a45deb6bea25b1d1de0e1072710d4748379c32ed072990766bd661dd3"
	I1026 09:24:53.523859  493244 cri.go:89] found id: "ebcf8b7c4e3060a1abd28a4f831dbec6225a03e23149c701a93b6a01c65593bc"
	I1026 09:24:53.523866  493244 cri.go:89] found id: "092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	I1026 09:24:53.523869  493244 cri.go:89] found id: "1d8d4803015824aa94be1a8ee92ed81a5ee87510fed7adf0d20b310895cc7673"
	I1026 09:24:53.523873  493244 cri.go:89] found id: ""
	I1026 09:24:53.523925  493244 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:24:53.539047  493244 out.go:203] 
	W1026 09:24:53.542157  493244 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 09:24:53.542233  493244 out.go:285] * 
	* 
	W1026 09:24:53.549462  493244 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 09:24:53.554410  493244 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-167519 --alsologtostderr -v=1 failed: exit status 80
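The trace makes the failure mode legible: the kubelet is already disabled after the first iteration (pause.go:52 flips from true to false), but container pausing never starts, because minikube enumerates running containers with `sudo runc list -f json` and /run/runc, runc's default state directory, does not exist on this node. Each retry (retry.go:31, roughly 162ms then 473ms) fails identically until the GUEST_PAUSE exit with status 80. The retry shape visible in those lines is an ordinary grow-the-delay loop; a sketch of it, with an attempt count and delays that are illustrative rather than minikube's actual constants:

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// retryBackoff re-runs fn with a growing delay between attempts,
	// mirroring the "will retry after ..." lines in the trace above.
	func retryBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}
	
	func main() {
		err := retryBackoff(3, 150*time.Millisecond, func() error {
			// The same command the pause path runs on the node.
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err != nil {
				return fmt.Errorf("runc list: %w: %s", err, out)
			}
			return nil
		})
		if err != nil {
			fmt.Println("giving up:", err) // the exit-status-80 path above
		}
	}

Whether /run/runc is genuinely absent (for instance because CRI-O here drives containers through a different OCI runtime or state root) is not confirmed by these logs; running `crictl ps` on the node would answer that at the CRI level, independent of which runtime binary owns the state directory.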
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-167519
helpers_test.go:243: (dbg) docker inspect old-k8s-version-167519:

-- stdout --
	[
	    {
	        "Id": "f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2",
	        "Created": "2025-10-26T09:22:22.22701342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:23:53.220543038Z",
	            "FinishedAt": "2025-10-26T09:23:52.307511755Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/hosts",
	        "LogPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2-json.log",
	        "Name": "/old-k8s-version-167519",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-167519:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-167519",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2",
	                "LowerDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-167519",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-167519/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-167519",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-167519",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-167519",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d62b2b0867dc6efcc82ee0510af8e183e1996d352f40cf212ea3404bc21e157",
	            "SandboxKey": "/var/run/docker/netns/3d62b2b0867d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-167519": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:93:8d:59:f2:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ece1bd65f7fecf7ce45d18dcdba0500d91ebe98a9871736d6b28c081ea483677",
	                    "EndpointID": "39a360ddd05d9757785a7479bd7dc060fb4e2c56090684754503b361578ee557",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-167519",
	                        "f43cbb714de4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
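The inspect dump above shows how the kic container publishes its guest ports: each of 22, 2376, 5000, 8443 and 32443/tcp is bound to 127.0.0.1 with a Docker-assigned host port (33430-33434, under NetworkSettings.Ports). Later in these logs minikube reads the SSH port back with a Go template ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}); the sketch below does the same by decoding the inspect JSON directly. It is illustrative only, not minikube's code, and the struct mirrors just the fields visible in the dump.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields needed here; `docker container inspect` emits a JSON array.
	type inspectOut struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func hostPortFor(container, port string) (string, error) {
		raw, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var out []inspectOut
		if err := json.Unmarshal(raw, &out); err != nil {
			return "", err
		}
		if len(out) == 0 || len(out[0].NetworkSettings.Ports[port]) == 0 {
			return "", fmt.Errorf("port %s not published on %s", port, container)
		}
		return out[0].NetworkSettings.Ports[port][0].HostPort, nil
	}

	func main() {
		p, err := hostPortFor("old-k8s-version-167519", "22/tcp") // "33430" in the dump above
		fmt.Println(p, err)
	}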
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519: exit status 2 (365.087467ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
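The "(may be ok)" note reflects how the harness reads minikube exit codes: `minikube status` ran, printed "Running", and exited 2, meaning the process completed and encoded a non-healthy component state in its exit code rather than failing to execute. A minimal Go sketch of that distinction (illustrative, not the harness's actual code):

	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-167519")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			log.Printf("host healthy: %s", out)
		case errors.As(err, &exitErr):
			// Ran to completion; the exit code (2 above) carries state,
			// so treat it as information rather than a hard failure.
			log.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
		default:
			log.Fatalf("could not run status at all: %v", err)
		}
	}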
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-167519 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-167519 logs -n 25: (1.421993725s)
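Throughout this report the harness suffixes slower commands with their wall-clock time (here "(1.421993725s)"), and minikube's own logs carry matching "duration metric: took ..." lines around each phase. A sketch of the pattern, assuming (not verified here) that only commands over about a second get the suffix:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "old-k8s-version-167519", "logs", "-n", "25")
		start := time.Now()
		err := cmd.Run()
		if d := time.Since(start); d > time.Second { // threshold is an assumption
			log.Printf("(dbg) Done: %v: (%s)", cmd, d)
		}
		if err != nil {
			log.Printf("(dbg) Non-zero exit: %v", err)
		}
	}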
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-796399 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo crio config                                                                                                                                                                                                             │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ delete  │ -p cilium-796399                                                                                                                                                                                                                              │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:17 UTC │
	│ start   │ -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:18 UTC │
	│ delete  │ -p force-systemd-env-003748                                                                                                                                                                                                                   │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:18 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	│ delete  │ -p kubernetes-upgrade-275732                                                                                                                                                                                                                  │ kubernetes-upgrade-275732    │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:21 UTC │
	│ start   │ -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ cert-options-094384 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                                                                                               │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:24:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:24:20.792818  490787 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:24:20.793025  490787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:24:20.793053  490787 out.go:374] Setting ErrFile to fd 2...
	I1026 09:24:20.793074  490787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:24:20.793345  490787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:24:20.793834  490787 out.go:368] Setting JSON to false
	I1026 09:24:20.795234  490787 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11211,"bootTime":1761459450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:24:20.795361  490787 start.go:141] virtualization:  
	I1026 09:24:20.798805  490787 out.go:179] * [default-k8s-diff-port-289159] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:24:20.802051  490787 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:24:20.802188  490787 notify.go:220] Checking for updates...
	I1026 09:24:20.808384  490787 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:24:20.813419  490787 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:20.817348  490787 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:24:20.820476  490787 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:24:20.823617  490787 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:24:20.827097  490787 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:24:20.827702  490787 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:24:20.855226  490787 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:24:20.855350  490787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:24:20.921928  490787 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:24:20.912527585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:24:20.922039  490787 docker.go:318] overlay module found
	I1026 09:24:20.925037  490787 out.go:179] * Using the docker driver based on existing profile
	I1026 09:24:20.930580  490787 start.go:305] selected driver: docker
	I1026 09:24:20.930607  490787 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:20.930700  490787 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:24:20.931508  490787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:24:20.997657  490787 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:24:20.985869895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:24:20.998018  490787 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:24:20.998060  490787 cni.go:84] Creating CNI manager for ""
	I1026 09:24:20.998135  490787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:24:20.998179  490787 start.go:349] cluster config:
	{Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:21.003745  490787 out.go:179] * Starting "default-k8s-diff-port-289159" primary control-plane node in "default-k8s-diff-port-289159" cluster
	I1026 09:24:21.006881  490787 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:24:21.009874  490787 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:24:21.012800  490787 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:24:21.012886  490787 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:24:21.012901  490787 cache.go:58] Caching tarball of preloaded images
	I1026 09:24:21.012900  490787 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:24:21.012995  490787 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:24:21.013006  490787 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:24:21.013113  490787 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json ...
	I1026 09:24:21.042740  490787 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:24:21.042761  490787 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:24:21.042775  490787 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:24:21.042798  490787 start.go:360] acquireMachinesLock for default-k8s-diff-port-289159: {Name:mk7eb4122b0c4e83c8a2504ee91491be3273f817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:24:21.042852  490787 start.go:364] duration metric: took 36.645µs to acquireMachinesLock for "default-k8s-diff-port-289159"
	I1026 09:24:21.042875  490787 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:24:21.042881  490787 fix.go:54] fixHost starting: 
	I1026 09:24:21.043143  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:21.070829  490787 fix.go:112] recreateIfNeeded on default-k8s-diff-port-289159: state=Stopped err=<nil>
	W1026 09:24:21.070860  490787 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 09:24:19.321856  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:21.323367  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:21.074069  490787 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-289159" ...
	I1026 09:24:21.074160  490787 cli_runner.go:164] Run: docker start default-k8s-diff-port-289159
	I1026 09:24:21.386205  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:21.412984  490787 kic.go:430] container "default-k8s-diff-port-289159" state is running.
	I1026 09:24:21.413387  490787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:24:21.439583  490787 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json ...
	I1026 09:24:21.439795  490787 machine.go:93] provisionDockerMachine start ...
	I1026 09:24:21.439856  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:21.470119  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:21.470885  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:21.470902  490787 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:24:21.471534  490787 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 09:24:24.626871  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289159
	
	I1026 09:24:24.626906  490787 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-289159"
	I1026 09:24:24.626977  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:24.656521  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:24.656863  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:24.656880  490787 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-289159 && echo "default-k8s-diff-port-289159" | sudo tee /etc/hostname
	I1026 09:24:24.836796  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289159
	
	I1026 09:24:24.836882  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:24.862827  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:24.863135  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:24.863153  490787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-289159' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-289159/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-289159' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:24:25.023959  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:24:25.023998  490787 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:24:25.024049  490787 ubuntu.go:190] setting up certificates
	I1026 09:24:25.024069  490787 provision.go:84] configureAuth start
	I1026 09:24:25.024144  490787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:24:25.050049  490787 provision.go:143] copyHostCerts
	I1026 09:24:25.050125  490787 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:24:25.050147  490787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:24:25.050226  490787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:24:25.050346  490787 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:24:25.050358  490787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:24:25.050386  490787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:24:25.050450  490787 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:24:25.050458  490787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:24:25.050481  490787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:24:25.050582  490787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-289159 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-289159 localhost minikube]
	I1026 09:24:25.241491  490787 provision.go:177] copyRemoteCerts
	I1026 09:24:25.241560  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:24:25.241599  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:25.262645  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:25.376082  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 09:24:25.400880  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:24:25.426918  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:24:25.447955  490787 provision.go:87] duration metric: took 423.860224ms to configureAuth
	I1026 09:24:25.447986  490787 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:24:25.448195  490787 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:24:25.448315  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:25.465610  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:25.465957  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:25.465979  490787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1026 09:24:23.823083  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:26.324457  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:25.868982  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:24:25.869070  490787 machine.go:96] duration metric: took 4.429265127s to provisionDockerMachine
	I1026 09:24:25.869095  490787 start.go:293] postStartSetup for "default-k8s-diff-port-289159" (driver="docker")
	I1026 09:24:25.869135  490787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:24:25.869216  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:24:25.869296  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:25.895908  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.013551  490787 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:24:26.018399  490787 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:24:26.018427  490787 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:24:26.018439  490787 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:24:26.018500  490787 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:24:26.018581  490787 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:24:26.018780  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:24:26.029841  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:24:26.059400  490787 start.go:296] duration metric: took 190.260648ms for postStartSetup
	I1026 09:24:26.059554  490787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:24:26.059624  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:26.083763  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.200913  490787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:24:26.206231  490787 fix.go:56] duration metric: took 5.163342116s for fixHost
	I1026 09:24:26.206252  490787 start.go:83] releasing machines lock for "default-k8s-diff-port-289159", held for 5.163389814s
	I1026 09:24:26.206320  490787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:24:26.228096  490787 ssh_runner.go:195] Run: cat /version.json
	I1026 09:24:26.228148  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:26.228393  490787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:24:26.228444  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:26.256865  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.259936  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.482476  490787 ssh_runner.go:195] Run: systemctl --version
	I1026 09:24:26.490894  490787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:24:26.540202  490787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:24:26.546116  490787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:24:26.546241  490787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:24:26.557298  490787 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:24:26.557332  490787 start.go:495] detecting cgroup driver to use...
	I1026 09:24:26.557450  490787 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:24:26.557524  490787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:24:26.575619  490787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:24:26.590571  490787 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:24:26.590689  490787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:24:26.608525  490787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:24:26.623977  490787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:24:26.802435  490787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:24:26.969928  490787 docker.go:234] disabling docker service ...
	I1026 09:24:26.970028  490787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:24:26.990089  490787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:24:27.010181  490787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:24:27.166764  490787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:24:27.324847  490787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:24:27.340792  490787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:24:27.361661  490787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:24:27.361757  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.373239  490787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:24:27.373354  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.385938  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.395816  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.405889  490787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:24:27.414834  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.424631  490787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.433717  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.451379  490787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:24:27.460499  490787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:24:27.468598  490787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:27.620184  490787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:24:28.155292  490787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:24:28.155394  490787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:24:28.159922  490787 start.go:563] Will wait 60s for crictl version
	I1026 09:24:28.160051  490787 ssh_runner.go:195] Run: which crictl
	I1026 09:24:28.164400  490787 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:24:28.229751  490787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:24:28.229895  490787 ssh_runner.go:195] Run: crio --version
	I1026 09:24:28.266872  490787 ssh_runner.go:195] Run: crio --version
	I1026 09:24:28.309425  490787 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:24:28.312528  490787 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-289159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:24:28.329418  490787 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:24:28.333423  490787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:24:28.343760  490787 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:24:28.343881  490787 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:24:28.343932  490787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:24:28.402805  490787 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:24:28.402826  490787 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:24:28.402877  490787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:24:28.462547  490787 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:24:28.462566  490787 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:24:28.462574  490787 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1026 09:24:28.462671  490787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-289159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:24:28.462777  490787 ssh_runner.go:195] Run: crio config
	I1026 09:24:28.560956  490787 cni.go:84] Creating CNI manager for ""
	I1026 09:24:28.560989  490787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:24:28.561007  490787 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:24:28.561058  490787 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-289159 NodeName:default-k8s-diff-port-289159 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:24:28.561246  490787 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-289159"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:24:28.561356  490787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:24:28.570801  490787 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:24:28.570919  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:24:28.579222  490787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1026 09:24:28.593201  490787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:24:28.607738  490787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
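	The 2225-byte kubeadm.yaml.new staged above can be sanity-checked offline before kubeadm consumes it; recent kubeadm releases (including the v1.34.1 binaries found on the node) ship a validate subcommand. A sketch:
	
	    # parse + validate the generated config without touching the cluster
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new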
	I1026 09:24:28.628571  490787 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:24:28.633146  490787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
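	The one-liner above is minikube's idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current IP, and copy the result back. Spelled out step by step (an equivalent sketch):
	
	    # 1. keep everything except the old control-plane entry
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	    # 2. append the fresh mapping
	    printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	    # 3. cp (not mv) writes through the existing inode, which keeps the
	    #    bind-mounted /etc/hosts inside the node container valid
	    sudo cp /tmp/h.$$ /etc/hosts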
	I1026 09:24:28.643623  490787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:28.792756  490787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:24:28.809460  490787 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159 for IP: 192.168.85.2
	I1026 09:24:28.809492  490787 certs.go:195] generating shared ca certs ...
	I1026 09:24:28.809510  490787 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:28.809729  490787 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:24:28.809811  490787 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:24:28.809827  490787 certs.go:257] generating profile certs ...
	I1026 09:24:28.809953  490787 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.key
	I1026 09:24:28.810067  490787 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key.65278fd2
	I1026 09:24:28.810141  490787 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key
	I1026 09:24:28.810300  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:24:28.810365  490787 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:24:28.810384  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:24:28.810429  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:24:28.810474  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:24:28.810520  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:24:28.810601  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:24:28.811510  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:24:28.886850  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:24:28.997139  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:24:29.088225  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:24:29.121387  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 09:24:29.148001  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:24:29.178815  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:24:29.201294  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:24:29.233958  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:24:29.272878  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:24:29.303117  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:24:29.332806  490787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:24:29.355686  490787 ssh_runner.go:195] Run: openssl version
	I1026 09:24:29.363914  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:24:29.379060  490787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:29.385613  490787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:29.385726  490787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:29.428435  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:24:29.437744  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:24:29.447881  490787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:24:29.452266  490787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:24:29.452362  490787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:24:29.559533  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:24:29.583091  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:24:29.601328  490787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:24:29.611717  490787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:24:29.611809  490787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:24:29.702081  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
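	The hash-and-symlink sequence above populates OpenSSL's hashed CA directory: TLS libraries locate a trusted issuer by looking up <subject-hash>.<n> under /etc/ssl/certs. Reproducing one of the links by hand (a sketch):
	
	    # b5213941 is the subject-name hash of the minikube CA
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    # the .0 suffix disambiguates distinct CAs whose subjects hash identically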
	I1026 09:24:29.718188  490787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:24:29.729270  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:24:29.815864  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:24:29.898693  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:24:29.981711  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:24:30.111912  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:24:30.239681  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
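	Each -checkend 86400 call asks openssl whether the certificate will still be valid 86400 seconds (one day) from now: exit 0 means yes, exit 1 means it would expire by then, which is the signal minikube uses to decide whether regeneration is needed. For example:
	
	    openssl x509 -noout -checkend 86400 \
	        -in /var/lib/minikube/certs/apiserver.crt \
	        && echo "cert good for 24h" || echo "cert expiring; regenerate"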
	I1026 09:24:30.312576  490787 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:30.312719  490787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:24:30.312818  490787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:24:30.461203  490787 cri.go:89] found id: "003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826"
	I1026 09:24:30.461281  490787 cri.go:89] found id: "4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75"
	I1026 09:24:30.461310  490787 cri.go:89] found id: "958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e"
	I1026 09:24:30.461329  490787 cri.go:89] found id: "97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6"
	I1026 09:24:30.461349  490787 cri.go:89] found id: ""
	I1026 09:24:30.461429  490787 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:24:30.500984  490787 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:30Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:24:30.501105  490787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:24:30.517173  490787 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:24:30.517196  490787 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:24:30.517278  490787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:24:30.531802  490787 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:24:30.532536  490787 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-289159" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:30.532875  490787 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-289159" cluster setting kubeconfig missing "default-k8s-diff-port-289159" context setting]
	I1026 09:24:30.533408  490787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:30.535347  490787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:24:30.555578  490787 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 09:24:30.555645  490787 kubeadm.go:601] duration metric: took 38.440725ms to restartPrimaryControlPlane
	I1026 09:24:30.555662  490787 kubeadm.go:402] duration metric: took 243.097489ms to StartCluster
	I1026 09:24:30.555678  490787 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:30.555787  490787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:30.556842  490787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:30.557115  490787 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:24:30.557461  490787 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:24:30.557542  490787 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-289159"
	I1026 09:24:30.557560  490787 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-289159"
	W1026 09:24:30.557567  490787 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:24:30.557589  490787 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:24:30.558023  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.558640  490787 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:24:30.558772  490787 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-289159"
	I1026 09:24:30.558808  490787 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-289159"
	W1026 09:24:30.558834  490787 addons.go:247] addon dashboard should already be in state true
	I1026 09:24:30.558874  490787 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:24:30.559029  490787 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-289159"
	I1026 09:24:30.559069  490787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-289159"
	I1026 09:24:30.559415  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.559645  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.580493  490787 out.go:179] * Verifying Kubernetes components...
	I1026 09:24:30.585503  490787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:30.611322  490787 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:24:30.615333  490787 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:24:30.619308  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:24:30.619337  490787 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:24:30.619400  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:30.624634  490787 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:24:30.629757  490787 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:24:30.629781  490787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:24:30.629848  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:30.632847  490787 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-289159"
	W1026 09:24:30.632869  490787 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:24:30.632894  490787 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:24:30.633305  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.681126  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:30.698267  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:30.706442  490787 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:24:30.706470  490787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:24:30.706535  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:30.735908  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	W1026 09:24:28.831220  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:31.328855  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:31.122369  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:24:31.122410  490787 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:24:31.163069  490787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:24:31.179920  490787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:24:31.183593  490787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:24:31.286765  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:24:31.286788  490787 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:24:31.446453  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:24:31.446490  490787 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:24:31.552114  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:24:31.552146  490787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:24:31.681101  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:24:31.681131  490787 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:24:31.708592  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:24:31.708625  490787 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:24:31.738805  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:24:31.738875  490787 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:24:31.764662  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:24:31.764738  490787 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:24:31.788171  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:24:31.788246  490787 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:24:31.826572  490787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 09:24:33.831237  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:36.321776  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:37.238110  490787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.075003762s)
	I1026 09:24:39.008363  490787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.828405668s)
	I1026 09:24:39.008428  490787 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.824811275s)
	I1026 09:24:39.008461  490787 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-289159" to be "Ready" ...
	I1026 09:24:39.008773  490787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.182096849s)
	I1026 09:24:39.011761  490787 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-289159 addons enable metrics-server
	
	I1026 09:24:39.014352  490787 node_ready.go:49] node "default-k8s-diff-port-289159" is "Ready"
	I1026 09:24:39.014409  490787 node_ready.go:38] duration metric: took 5.900819ms for node "default-k8s-diff-port-289159" to be "Ready" ...
	I1026 09:24:39.014425  490787 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:24:39.014502  490787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:24:39.017756  490787 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1026 09:24:38.325985  488173 pod_ready.go:94] pod "coredns-5dd5756b68-h6qmf" is "Ready"
	I1026 09:24:38.326017  488173 pod_ready.go:86] duration metric: took 28.010050296s for pod "coredns-5dd5756b68-h6qmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.332197  488173 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.340000  488173 pod_ready.go:94] pod "etcd-old-k8s-version-167519" is "Ready"
	I1026 09:24:38.340041  488173 pod_ready.go:86] duration metric: took 7.805721ms for pod "etcd-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.344147  488173 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.355596  488173 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-167519" is "Ready"
	I1026 09:24:38.355627  488173 pod_ready.go:86] duration metric: took 11.440648ms for pod "kube-apiserver-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.359714  488173 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.524252  488173 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-167519" is "Ready"
	I1026 09:24:38.524288  488173 pod_ready.go:86] duration metric: took 164.544167ms for pod "kube-controller-manager-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.725479  488173 pod_ready.go:83] waiting for pod "kube-proxy-nxhdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.123189  488173 pod_ready.go:94] pod "kube-proxy-nxhdx" is "Ready"
	I1026 09:24:39.123228  488173 pod_ready.go:86] duration metric: took 397.707344ms for pod "kube-proxy-nxhdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.324190  488173 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.723436  488173 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-167519" is "Ready"
	I1026 09:24:39.723468  488173 pod_ready.go:86] duration metric: took 399.187774ms for pod "kube-scheduler-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.723481  488173 pod_ready.go:40] duration metric: took 29.412142997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:24:39.786148  488173 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1026 09:24:39.789489  488173 out.go:203] 
	W1026 09:24:39.792428  488173 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 09:24:39.795360  488173 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 09:24:39.798373  488173 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-167519" cluster and "default" namespace by default
	I1026 09:24:39.020671  490787 addons.go:514] duration metric: took 8.463195004s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1026 09:24:39.036018  490787 api_server.go:72] duration metric: took 8.478857212s to wait for apiserver process to appear ...
	I1026 09:24:39.036053  490787 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:24:39.036073  490787 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1026 09:24:39.045306  490787 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
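	The healthz probe is a plain HTTPS GET that the API server answers anonymously (the default system:public-info-viewer binding covers /healthz). The same check from the host, as a sketch:
	
	    # -k skips verification; the serving cert is signed by the minikube CA
	    curl -k https://192.168.85.2:8444/healthz
	    # or verify against the CA from this run's profile:
	    curl --cacert /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt \
	        https://192.168.85.2:8444/healthz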
	I1026 09:24:39.047328  490787 api_server.go:141] control plane version: v1.34.1
	I1026 09:24:39.047367  490787 api_server.go:131] duration metric: took 11.306673ms to wait for apiserver health ...
	I1026 09:24:39.047377  490787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:24:39.052684  490787 system_pods.go:59] 8 kube-system pods found
	I1026 09:24:39.052726  490787 system_pods.go:61] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:24:39.052736  490787 system_pods.go:61] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:24:39.052742  490787 system_pods.go:61] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:24:39.052750  490787 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:24:39.052758  490787 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:24:39.052767  490787 system_pods.go:61] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:24:39.052775  490787 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:24:39.052788  490787 system_pods.go:61] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:24:39.052794  490787 system_pods.go:74] duration metric: took 5.410966ms to wait for pod list to return data ...
	I1026 09:24:39.052802  490787 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:24:39.055999  490787 default_sa.go:45] found service account: "default"
	I1026 09:24:39.056025  490787 default_sa.go:55] duration metric: took 3.212661ms for default service account to be created ...
	I1026 09:24:39.056043  490787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:24:39.059018  490787 system_pods.go:86] 8 kube-system pods found
	I1026 09:24:39.059050  490787 system_pods.go:89] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:24:39.059060  490787 system_pods.go:89] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:24:39.059066  490787 system_pods.go:89] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:24:39.059073  490787 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:24:39.059080  490787 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:24:39.059089  490787 system_pods.go:89] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:24:39.059096  490787 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:24:39.059111  490787 system_pods.go:89] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:24:39.059118  490787 system_pods.go:126] duration metric: took 3.069767ms to wait for k8s-apps to be running ...
	I1026 09:24:39.059126  490787 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:24:39.059180  490787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:24:39.074284  490787 system_svc.go:56] duration metric: took 15.14847ms WaitForService to wait for kubelet
	I1026 09:24:39.074316  490787 kubeadm.go:586] duration metric: took 8.51716086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:24:39.074337  490787 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:24:39.077671  490787 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:24:39.077717  490787 node_conditions.go:123] node cpu capacity is 2
	I1026 09:24:39.077730  490787 node_conditions.go:105] duration metric: took 3.388097ms to run NodePressure ...
	I1026 09:24:39.077745  490787 start.go:241] waiting for startup goroutines ...
	I1026 09:24:39.077753  490787 start.go:246] waiting for cluster config update ...
	I1026 09:24:39.077767  490787 start.go:255] writing updated cluster config ...
	I1026 09:24:39.078059  490787 ssh_runner.go:195] Run: rm -f paused
	I1026 09:24:39.083509  490787 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:24:39.092836  490787 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szwxb" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:24:41.105209  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:43.600503  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:45.602032  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:47.604109  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:50.100554  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.548663649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.558103344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.559188256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.581762783Z" level=info msg="Created container 092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s/dashboard-metrics-scraper" id=1ecac2fb-f7e8-41eb-9a26-558a0ae443ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.583116639Z" level=info msg="Starting container: 092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483" id=4191061e-62d9-4507-a1ed-ff20655160df name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.585130211Z" level=info msg="Started container" PID=1660 containerID=092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s/dashboard-metrics-scraper id=4191061e-62d9-4507-a1ed-ff20655160df name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf
	Oct 26 09:24:45 old-k8s-version-167519 conmon[1658]: conmon 092688620d13cf3367c5 <ninfo>: container 1660 exited with status 1
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.743041991Z" level=info msg="Removing container: 7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef" id=7f933fc3-7003-4286-aeb9-c02befc1be19 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.757080876Z" level=info msg="Error loading conmon cgroup of container 7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef: cgroup deleted" id=7f933fc3-7003-4286-aeb9-c02befc1be19 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.761414191Z" level=info msg="Removed container 7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s/dashboard-metrics-scraper" id=7f933fc3-7003-4286-aeb9-c02befc1be19 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.249310923Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.254943045Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.254977441Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.254998389Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.260719571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.260891183Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.260967328Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.267675386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.267837423Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.267912697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.272689842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.273841791Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.273935758Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.282799831Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.283509739Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	092688620d13c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   2                   ba9172b61d371       dashboard-metrics-scraper-5f989dc9cf-nc85s       kubernetes-dashboard
	f594970607af9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           16 seconds ago      Running             storage-provisioner         2                   f446c9ec29cc2       storage-provisioner                              kube-system
	1d8d480301582       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   21 seconds ago      Running             kubernetes-dashboard        0                   47e71626540fa       kubernetes-dashboard-8694d4445c-2z5gd            kubernetes-dashboard
	cef106e961046       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           47 seconds ago      Running             coredns                     1                   6878946b02ee2       coredns-5dd5756b68-h6qmf                         kube-system
	c15736d898229       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           47 seconds ago      Running             busybox                     1                   65099d311817c       busybox                                          default
	72253558ec19e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           47 seconds ago      Running             kube-proxy                  1                   5138ce82347ae       kube-proxy-nxhdx                                 kube-system
	25a2d3d596357       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           47 seconds ago      Running             kindnet-cni                 1                   313bb3731313a       kindnet-ljrzw                                    kube-system
	661f9947ec075       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           47 seconds ago      Exited              storage-provisioner         1                   f446c9ec29cc2       storage-provisioner                              kube-system
	71eca1ab06e9f       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           53 seconds ago      Running             kube-apiserver              1                   19b8bd1065fa2       kube-apiserver-old-k8s-version-167519            kube-system
	d6f0b67fc7a92       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           53 seconds ago      Running             kube-scheduler              1                   a224cebad46e1       kube-scheduler-old-k8s-version-167519            kube-system
	ab3db05a45deb       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           53 seconds ago      Running             etcd                        1                   737a3957e93ca       etcd-old-k8s-version-167519                      kube-system
	ebcf8b7c4e306       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           53 seconds ago      Running             kube-controller-manager     1                   a38684c579e66       kube-controller-manager-old-k8s-version-167519   kube-system
	
	
	==> coredns [cef106e961046296a2cb95911ff65cc35c4668e21eee6d64266403c4b0250c33] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43367 - 32669 "HINFO IN 5005322454416716139.758283188886650633. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014287314s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
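	10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR configured earlier, i.e. the ClusterIP of the in-cluster "kubernetes" Service that fronts the API server; the i/o timeout is consistent with CoreDNS starting while the restarted API server was still coming up. To confirm the mapping (a sketch):
	
	    kubectl get svc kubernetes -n default        # CLUSTER-IP: 10.96.0.1
	    kubectl get endpoints kubernetes -n default  # the backing apiserver address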
	
	
	==> describe nodes <==
	Name:               old-k8s-version-167519
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-167519
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=old-k8s-version-167519
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_22_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:22:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-167519
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:23:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-167519
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1a149092-d049-4ee0-944f-a1babc9259c8
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-h6qmf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-167519                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-ljrzw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-167519             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-167519    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-nxhdx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-167519             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-nc85s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2z5gd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 104s                   kube-proxy       
	  Normal  Starting                 45s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     117s                   kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                   kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                   kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           105s                   node-controller  Node old-k8s-version-167519 event: Registered Node old-k8s-version-167519 in Controller
	  Normal  NodeReady                89s                    kubelet          Node old-k8s-version-167519 status is now: NodeReady
	  Normal  Starting                 54s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)      kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)      kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)      kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                    node-controller  Node old-k8s-version-167519 event: Registered Node old-k8s-version-167519 in Controller
	
	
	==> dmesg <==
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab3db05a45deb6bea25b1d1de0e1072710d4748379c32ed072990766bd661dd3] <==
	{"level":"info","ts":"2025-10-26T09:24:01.306961Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-26T09:24:01.307057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:24:01.307083Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:24:01.310911Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T09:24:01.316803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T09:24:01.317087Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T09:24:01.344414Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T09:24:01.349941Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T09:24:01.34435Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T09:24:01.355474Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T09:24:01.355582Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T09:24:02.278818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T09:24:02.278869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T09:24:02.278895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T09:24:02.278908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.278914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.278925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.278932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.293987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-167519 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T09:24:02.294031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:24:02.295001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-26T09:24:02.295141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:24:02.295931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T09:24:02.322855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T09:24:02.323168Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:24:54 up  3:07,  0 user,  load average: 4.00, 3.13, 2.75
	Linux old-k8s-version-167519 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [25a2d3d5963571b5d87758e7d01d3e8fbafe81732722b0e6ad290d688e909afa] <==
	I1026 09:24:07.015839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:24:07.099147       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:24:07.099353       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:24:07.099395       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:24:07.099448       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:24:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:24:07.304730       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:24:07.304759       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:24:07.304767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:24:07.305054       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:24:37.249321       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:24:37.305058       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:24:37.305305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:24:37.305379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1026 09:24:38.505512       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:24:38.505641       1 metrics.go:72] Registering metrics
	I1026 09:24:38.505734       1 controller.go:711] "Syncing nftables rules"
	I1026 09:24:47.248215       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:24:47.248402       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71eca1ab06e9f401fcbb26b13ea7782bc9fb8408408ae068731fb754c9192995] <==
	I1026 09:24:06.655972       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:24:06.657512       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 09:24:06.663967       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 09:24:06.666509       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 09:24:06.666584       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:24:06.680903       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 09:24:06.681588       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 09:24:06.683012       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 09:24:06.683070       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 09:24:06.683108       1 aggregator.go:166] initial CRD sync complete...
	I1026 09:24:06.683114       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 09:24:06.683119       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:24:06.683125       1 cache.go:39] Caches are synced for autoregister controller
	E1026 09:24:06.975952       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:24:07.136491       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:24:10.053892       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 09:24:10.119039       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 09:24:10.152379       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:24:10.167028       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:24:10.177417       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 09:24:10.232695       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.154.219"}
	I1026 09:24:10.254170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.217.244"}
	I1026 09:24:20.257113       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 09:24:20.273723       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:24:20.286254       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ebcf8b7c4e3060a1abd28a4f831dbec6225a03e23149c701a93b6a01c65593bc] <==
	I1026 09:24:20.324296       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 09:24:20.333966       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-nc85s"
	I1026 09:24:20.334391       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-2z5gd"
	I1026 09:24:20.342880       1 shared_informer.go:318] Caches are synced for cronjob
	I1026 09:24:20.353498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.554386ms"
	I1026 09:24:20.374316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.313608ms"
	I1026 09:24:20.387100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.550507ms"
	I1026 09:24:20.387480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.203µs"
	I1026 09:24:20.395255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="179.686µs"
	I1026 09:24:20.411242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.951289ms"
	I1026 09:24:20.411398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.222µs"
	I1026 09:24:20.427426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.092µs"
	I1026 09:24:20.445862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.893µs"
	I1026 09:24:20.679932       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 09:24:20.679966       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 09:24:20.740666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 09:24:25.681264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.516µs"
	I1026 09:24:26.683317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.944µs"
	I1026 09:24:27.709486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.127µs"
	I1026 09:24:33.759511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.562855ms"
	I1026 09:24:33.760473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.619µs"
	I1026 09:24:38.086265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.443743ms"
	I1026 09:24:38.087314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.672µs"
	I1026 09:24:45.762096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.607µs"
	I1026 09:24:50.711974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.316µs"
	
	
	==> kube-proxy [72253558ec19ee7592abca9835453e0f5cc9ab93df04418f1022780e0b3e9acb] <==
	I1026 09:24:09.036971       1 server_others.go:69] "Using iptables proxy"
	I1026 09:24:09.093456       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1026 09:24:09.639581       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:24:09.641882       1 server_others.go:152] "Using iptables Proxier"
	I1026 09:24:09.641974       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 09:24:09.642005       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 09:24:09.642059       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 09:24:09.642291       1 server.go:846] "Version info" version="v1.28.0"
	I1026 09:24:09.642484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:09.683590       1 config.go:188] "Starting service config controller"
	I1026 09:24:09.683621       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 09:24:09.683639       1 config.go:97] "Starting endpoint slice config controller"
	I1026 09:24:09.683642       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 09:24:09.688038       1 config.go:315] "Starting node config controller"
	I1026 09:24:09.688061       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 09:24:09.783762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 09:24:09.783814       1 shared_informer.go:318] Caches are synced for service config
	I1026 09:24:09.809315       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d6f0b67fc7a92431594127a805c5e2a9df01b5bdd70421309c258bb58ff6bfe6] <==
	I1026 09:24:05.048420       1 serving.go:348] Generated self-signed cert in-memory
	I1026 09:24:09.546126       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 09:24:09.546237       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:09.553728       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 09:24:09.553915       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 09:24:09.553956       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 09:24:09.554000       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 09:24:09.574572       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:24:09.579545       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 09:24:09.579741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:09.579957       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 09:24:09.655362       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 09:24:09.682830       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 09:24:09.682904       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: I1026 09:24:20.474089     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/231051f5-8219-4d3d-8e81-8c0c018c2ab0-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-nc85s\" (UID: \"231051f5-8219-4d3d-8e81-8c0c018c2ab0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s"
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: I1026 09:24:20.474252     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdnsd\" (UniqueName: \"kubernetes.io/projected/fab603ff-4a4d-4c2c-9dc6-16afee3b82cc-kube-api-access-zdnsd\") pod \"kubernetes-dashboard-8694d4445c-2z5gd\" (UID: \"fab603ff-4a4d-4c2c-9dc6-16afee3b82cc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2z5gd"
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: I1026 09:24:20.474387     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fab603ff-4a4d-4c2c-9dc6-16afee3b82cc-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-2z5gd\" (UID: \"fab603ff-4a4d-4c2c-9dc6-16afee3b82cc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2z5gd"
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: W1026 09:24:20.755038     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf WatchSource:0}: Error finding container ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf: Status 404 returned error can't find the container with id ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: W1026 09:24:20.761644     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-47e71626540fa0422eca00a64fd5f7aa26139aea0b2ff08cfc177d519371610c WatchSource:0}: Error finding container 47e71626540fa0422eca00a64fd5f7aa26139aea0b2ff08cfc177d519371610c: Status 404 returned error can't find the container with id 47e71626540fa0422eca00a64fd5f7aa26139aea0b2ff08cfc177d519371610c
	Oct 26 09:24:25 old-k8s-version-167519 kubelet[775]: I1026 09:24:25.650866     775 scope.go:117] "RemoveContainer" containerID="c2b1f682f66434f97b55952cf1907048974b8ef9fb796e33d4d87fdd1ca11021"
	Oct 26 09:24:26 old-k8s-version-167519 kubelet[775]: I1026 09:24:26.655033     775 scope.go:117] "RemoveContainer" containerID="c2b1f682f66434f97b55952cf1907048974b8ef9fb796e33d4d87fdd1ca11021"
	Oct 26 09:24:26 old-k8s-version-167519 kubelet[775]: I1026 09:24:26.655409     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:26 old-k8s-version-167519 kubelet[775]: E1026 09:24:26.655728     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:27 old-k8s-version-167519 kubelet[775]: I1026 09:24:27.666902     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:27 old-k8s-version-167519 kubelet[775]: E1026 09:24:27.667209     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:30 old-k8s-version-167519 kubelet[775]: I1026 09:24:30.691287     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:30 old-k8s-version-167519 kubelet[775]: E1026 09:24:30.691640     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:37 old-k8s-version-167519 kubelet[775]: I1026 09:24:37.715763     775 scope.go:117] "RemoveContainer" containerID="661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe"
	Oct 26 09:24:37 old-k8s-version-167519 kubelet[775]: I1026 09:24:37.757967     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2z5gd" podStartSLOduration=5.721598618 podCreationTimestamp="2025-10-26 09:24:20 +0000 UTC" firstStartedPulling="2025-10-26 09:24:20.770915966 +0000 UTC m=+20.629990141" lastFinishedPulling="2025-10-26 09:24:32.807212176 +0000 UTC m=+32.666286343" observedRunningTime="2025-10-26 09:24:33.727965138 +0000 UTC m=+33.587039305" watchObservedRunningTime="2025-10-26 09:24:37.75789482 +0000 UTC m=+37.616968995"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: I1026 09:24:45.545032     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: I1026 09:24:45.737965     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: I1026 09:24:45.738244     775 scope.go:117] "RemoveContainer" containerID="092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: E1026 09:24:45.738531     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:50 old-k8s-version-167519 kubelet[775]: I1026 09:24:50.691231     775 scope.go:117] "RemoveContainer" containerID="092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	Oct 26 09:24:50 old-k8s-version-167519 kubelet[775]: E1026 09:24:50.692030     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:52 old-k8s-version-167519 kubelet[775]: I1026 09:24:52.188168     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 09:24:52 old-k8s-version-167519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:24:52 old-k8s-version-167519 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:24:52 old-k8s-version-167519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1d8d4803015824aa94be1a8ee92ed81a5ee87510fed7adf0d20b310895cc7673] <==
	2025/10/26 09:24:32 Starting overwatch
	2025/10/26 09:24:32 Using namespace: kubernetes-dashboard
	2025/10/26 09:24:32 Using in-cluster config to connect to apiserver
	2025/10/26 09:24:32 Using secret token for csrf signing
	2025/10/26 09:24:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:24:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:24:32 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 09:24:32 Generating JWE encryption key
	2025/10/26 09:24:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:24:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:24:33 Initializing JWE encryption key from synchronized object
	2025/10/26 09:24:33 Creating in-cluster Sidecar client
	2025/10/26 09:24:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:24:33 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe] <==
	I1026 09:24:07.521974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:24:37.524345       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f594970607af9f64c654df6707c1df4091dfe2988957faf28b530298bae0041c] <==
	I1026 09:24:37.824830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:24:37.856458       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:24:37.856502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 09:24:55.308610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:24:55.308681       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21d96fe7-10ce-4078-9698-96debabaa3e8", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-167519_586f38bd-c1d8-480b-8390-da2cf031c676 became leader
	I1026 09:24:55.308781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-167519_586f38bd-c1d8-480b-8390-da2cf031c676!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-167519 -n old-k8s-version-167519
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-167519 -n old-k8s-version-167519: exit status 2 (379.338885ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-167519 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-167519
helpers_test.go:243: (dbg) docker inspect old-k8s-version-167519:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2",
	        "Created": "2025-10-26T09:22:22.22701342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:23:53.220543038Z",
	            "FinishedAt": "2025-10-26T09:23:52.307511755Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/hosts",
	        "LogPath": "/var/lib/docker/containers/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2-json.log",
	        "Name": "/old-k8s-version-167519",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-167519:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-167519",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2",
	                "LowerDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a483229368b0404d7e5b106ca530b61bbda229a5e9842fb384bcbbca5aa9f2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-167519",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-167519/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-167519",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-167519",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-167519",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d62b2b0867dc6efcc82ee0510af8e183e1996d352f40cf212ea3404bc21e157",
	            "SandboxKey": "/var/run/docker/netns/3d62b2b0867d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-167519": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:93:8d:59:f2:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ece1bd65f7fecf7ce45d18dcdba0500d91ebe98a9871736d6b28c081ea483677",
	                    "EndpointID": "39a360ddd05d9757785a7479bd7dc060fb4e2c56090684754503b361578ee557",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-167519",
	                        "f43cbb714de4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519: exit status 2 (372.011768ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-167519 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-167519 logs -n 25: (1.330794414s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-796399 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ ssh     │ -p cilium-796399 sudo crio config                                                                                                                                                                                                             │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │                     │
	│ delete  │ -p cilium-796399                                                                                                                                                                                                                              │ cilium-796399                │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:17 UTC │
	│ start   │ -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:17 UTC │ 26 Oct 25 09:18 UTC │
	│ delete  │ -p force-systemd-env-003748                                                                                                                                                                                                                   │ force-systemd-env-003748     │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:18 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	│ delete  │ -p kubernetes-upgrade-275732                                                                                                                                                                                                                  │ kubernetes-upgrade-275732    │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:21 UTC │
	│ start   │ -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ cert-options-094384 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                                                                                               │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:24:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:24:20.792818  490787 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:24:20.793025  490787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:24:20.793053  490787 out.go:374] Setting ErrFile to fd 2...
	I1026 09:24:20.793074  490787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:24:20.793345  490787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:24:20.793834  490787 out.go:368] Setting JSON to false
	I1026 09:24:20.795234  490787 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11211,"bootTime":1761459450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:24:20.795361  490787 start.go:141] virtualization:  
	I1026 09:24:20.798805  490787 out.go:179] * [default-k8s-diff-port-289159] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:24:20.802051  490787 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:24:20.802188  490787 notify.go:220] Checking for updates...
	I1026 09:24:20.808384  490787 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:24:20.813419  490787 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:20.817348  490787 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:24:20.820476  490787 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:24:20.823617  490787 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:24:20.827097  490787 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:24:20.827702  490787 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:24:20.855226  490787 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:24:20.855350  490787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:24:20.921928  490787 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:24:20.912527585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:24:20.922039  490787 docker.go:318] overlay module found
	I1026 09:24:20.925037  490787 out.go:179] * Using the docker driver based on existing profile
	I1026 09:24:20.930580  490787 start.go:305] selected driver: docker
	I1026 09:24:20.930607  490787 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:20.930700  490787 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:24:20.931508  490787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:24:20.997657  490787 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:24:20.985869895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:24:20.998018  490787 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:24:20.998060  490787 cni.go:84] Creating CNI manager for ""
	I1026 09:24:20.998135  490787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:24:20.998179  490787 start.go:349] cluster config:
	{Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:21.003745  490787 out.go:179] * Starting "default-k8s-diff-port-289159" primary control-plane node in "default-k8s-diff-port-289159" cluster
	I1026 09:24:21.006881  490787 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:24:21.009874  490787 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:24:21.012800  490787 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:24:21.012886  490787 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:24:21.012901  490787 cache.go:58] Caching tarball of preloaded images
	I1026 09:24:21.012900  490787 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:24:21.012995  490787 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:24:21.013006  490787 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:24:21.013113  490787 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json ...
	I1026 09:24:21.042740  490787 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:24:21.042761  490787 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:24:21.042775  490787 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:24:21.042798  490787 start.go:360] acquireMachinesLock for default-k8s-diff-port-289159: {Name:mk7eb4122b0c4e83c8a2504ee91491be3273f817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:24:21.042852  490787 start.go:364] duration metric: took 36.645µs to acquireMachinesLock for "default-k8s-diff-port-289159"
	I1026 09:24:21.042875  490787 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:24:21.042881  490787 fix.go:54] fixHost starting: 
	I1026 09:24:21.043143  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:21.070829  490787 fix.go:112] recreateIfNeeded on default-k8s-diff-port-289159: state=Stopped err=<nil>
	W1026 09:24:21.070860  490787 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 09:24:19.321856  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:21.323367  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:21.074069  490787 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-289159" ...
	I1026 09:24:21.074160  490787 cli_runner.go:164] Run: docker start default-k8s-diff-port-289159
	I1026 09:24:21.386205  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:21.412984  490787 kic.go:430] container "default-k8s-diff-port-289159" state is running.
	I1026 09:24:21.413387  490787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:24:21.439583  490787 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/config.json ...
	I1026 09:24:21.439795  490787 machine.go:93] provisionDockerMachine start ...
	I1026 09:24:21.439856  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:21.470119  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:21.470885  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:21.470902  490787 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:24:21.471534  490787 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 09:24:24.626871  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289159
	
	I1026 09:24:24.626906  490787 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-289159"
	I1026 09:24:24.626977  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:24.656521  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:24.656863  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:24.656880  490787 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-289159 && echo "default-k8s-diff-port-289159" | sudo tee /etc/hostname
	I1026 09:24:24.836796  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289159
	
	I1026 09:24:24.836882  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:24.862827  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:24.863135  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:24.863153  490787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-289159' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-289159/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-289159' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:24:25.023959  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
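	The heredoc above is minikube's idempotent /etc/hosts fix: replace an existing 127.0.1.1 entry if present, otherwise append one. A minimal standalone sketch of the same pattern, with NAME standing in for the hostname (NAME is a placeholder, not from the log):
	
	    # Sketch: keep exactly one 127.0.1.1 entry for hostname NAME (placeholder).
	    if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	      sudo sed -i 's/^127.0.1.1[[:space:]].*/127.0.1.1 NAME/' /etc/hosts   # replace in place
	    else
	      echo '127.0.1.1 NAME' | sudo tee -a /etc/hosts                       # no entry yet: append
	    fi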
	I1026 09:24:25.023998  490787 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:24:25.024049  490787 ubuntu.go:190] setting up certificates
	I1026 09:24:25.024069  490787 provision.go:84] configureAuth start
	I1026 09:24:25.024144  490787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:24:25.050049  490787 provision.go:143] copyHostCerts
	I1026 09:24:25.050125  490787 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:24:25.050147  490787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:24:25.050226  490787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:24:25.050346  490787 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:24:25.050358  490787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:24:25.050386  490787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:24:25.050450  490787 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:24:25.050458  490787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:24:25.050481  490787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:24:25.050582  490787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-289159 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-289159 localhost minikube]
	I1026 09:24:25.241491  490787 provision.go:177] copyRemoteCerts
	I1026 09:24:25.241560  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:24:25.241599  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:25.262645  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:25.376082  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 09:24:25.400880  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:24:25.426918  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:24:25.447955  490787 provision.go:87] duration metric: took 423.860224ms to configureAuth
	I1026 09:24:25.447986  490787 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:24:25.448195  490787 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:24:25.448315  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:25.465610  490787 main.go:141] libmachine: Using SSH client type: native
	I1026 09:24:25.465957  490787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1026 09:24:25.465979  490787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1026 09:24:23.823083  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:26.324457  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:25.868982  490787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:24:25.869070  490787 machine.go:96] duration metric: took 4.429265127s to provisionDockerMachine
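	The tee command above wrote a sysconfig drop-in that passes the service CIDR to CRI-O as an insecure registry, then restarted the service. A quick way to confirm the drop-in landed and the runtime came back, as a sketch run on the node:
	
	    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # expect: active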
	I1026 09:24:25.869095  490787 start.go:293] postStartSetup for "default-k8s-diff-port-289159" (driver="docker")
	I1026 09:24:25.869135  490787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:24:25.869216  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:24:25.869296  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:25.895908  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.013551  490787 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:24:26.018399  490787 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:24:26.018427  490787 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:24:26.018439  490787 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:24:26.018500  490787 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:24:26.018581  490787 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:24:26.018780  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:24:26.029841  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:24:26.059400  490787 start.go:296] duration metric: took 190.260648ms for postStartSetup
	I1026 09:24:26.059554  490787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:24:26.059624  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:26.083763  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.200913  490787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:24:26.206231  490787 fix.go:56] duration metric: took 5.163342116s for fixHost
	I1026 09:24:26.206252  490787 start.go:83] releasing machines lock for "default-k8s-diff-port-289159", held for 5.163389814s
	I1026 09:24:26.206320  490787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-289159
	I1026 09:24:26.228096  490787 ssh_runner.go:195] Run: cat /version.json
	I1026 09:24:26.228148  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:26.228393  490787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:24:26.228444  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:26.256865  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.259936  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:26.482476  490787 ssh_runner.go:195] Run: systemctl --version
	I1026 09:24:26.490894  490787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:24:26.540202  490787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:24:26.546116  490787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:24:26.546241  490787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:24:26.557298  490787 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
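	Before enabling kindnet, minikube sidelines any competing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, which the runtime ignores; here nothing matched, so nothing was moved. The rename pattern, condensed into a standalone sketch:
	
	    # Sketch: disable conflicting CNI configs (same pattern as the find/mv above).
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;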
	I1026 09:24:26.557332  490787 start.go:495] detecting cgroup driver to use...
	I1026 09:24:26.557450  490787 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:24:26.557524  490787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:24:26.575619  490787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:24:26.590571  490787 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:24:26.590689  490787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:24:26.608525  490787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:24:26.623977  490787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:24:26.802435  490787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:24:26.969928  490787 docker.go:234] disabling docker service ...
	I1026 09:24:26.970028  490787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:24:26.990089  490787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:24:27.010181  490787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:24:27.166764  490787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:24:27.324847  490787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
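	Only one container runtime can own the node, so cri-docker and docker are stopped, disabled, and masked before CRI-O is configured; masking points the unit at /dev/null so socket activation cannot revive it. The sequence, condensed into a sketch mirroring the systemctl calls above:
	
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    sudo systemctl is-active --quiet docker || echo 'docker is inactive'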
	I1026 09:24:27.340792  490787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:24:27.361661  490787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:24:27.361757  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.373239  490787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:24:27.373354  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.385938  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.395816  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.405889  490787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:24:27.414834  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.424631  490787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.433717  490787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:24:27.451379  490787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:24:27.460499  490787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:24:27.468598  490787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:27.620184  490787 ssh_runner.go:195] Run: sudo systemctl restart crio
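	The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs to match the host, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. Collected into one sketch (same file and keys as the log):
	
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                         # drop any stale value
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"  # re-add after cgroup_manager
	    sudo systemctl daemon-reload && sudo systemctl restart crio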
	I1026 09:24:28.155292  490787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:24:28.155394  490787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:24:28.159922  490787 start.go:563] Will wait 60s for crictl version
	I1026 09:24:28.160051  490787 ssh_runner.go:195] Run: which crictl
	I1026 09:24:28.164400  490787 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:24:28.229751  490787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:24:28.229895  490787 ssh_runner.go:195] Run: crio --version
	I1026 09:24:28.266872  490787 ssh_runner.go:195] Run: crio --version
	I1026 09:24:28.309425  490787 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:24:28.312528  490787 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-289159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:24:28.329418  490787 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:24:28.333423  490787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:24:28.343760  490787 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:24:28.343881  490787 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:24:28.343932  490787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:24:28.402805  490787 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:24:28.402826  490787 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:24:28.402877  490787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:24:28.462547  490787 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:24:28.462566  490787 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:24:28.462574  490787 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1026 09:24:28.462671  490787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-289159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
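	The unit fragment above is a systemd drop-in: the empty ExecStart= clears the packaged command before the override redefines it, which is the standard way to replace ExecStart from a drop-in. To see where it lives and what the merged unit looks like, a sketch:
	
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the drop-in scp'd below
	    systemctl cat kubelet                                       # unit file plus all drop-ins, merged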
	I1026 09:24:28.462777  490787 ssh_runner.go:195] Run: crio config
	I1026 09:24:28.560956  490787 cni.go:84] Creating CNI manager for ""
	I1026 09:24:28.560989  490787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:24:28.561007  490787 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:24:28.561058  490787 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-289159 NodeName:default-k8s-diff-port-289159 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:24:28.561246  490787 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-289159"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
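	The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and diffed against the running copy before any reconfiguration. To sanity-check such a file by hand, recent kubeadm ships a validator; a sketch, assuming kubeadm v1.26+ semantics:
	
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new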
	
	I1026 09:24:28.561356  490787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:24:28.570801  490787 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:24:28.570919  490787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:24:28.579222  490787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1026 09:24:28.593201  490787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:24:28.607738  490787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1026 09:24:28.628571  490787 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:24:28.633146  490787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:24:28.643623  490787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:28.792756  490787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:24:28.809460  490787 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159 for IP: 192.168.85.2
	I1026 09:24:28.809492  490787 certs.go:195] generating shared ca certs ...
	I1026 09:24:28.809510  490787 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:28.809729  490787 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:24:28.809811  490787 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:24:28.809827  490787 certs.go:257] generating profile certs ...
	I1026 09:24:28.809953  490787 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.key
	I1026 09:24:28.810067  490787 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key.65278fd2
	I1026 09:24:28.810141  490787 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key
	I1026 09:24:28.810300  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:24:28.810365  490787 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:24:28.810384  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:24:28.810429  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:24:28.810474  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:24:28.810520  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:24:28.810601  490787 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:24:28.811510  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:24:28.886850  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:24:28.997139  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:24:29.088225  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:24:29.121387  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 09:24:29.148001  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:24:29.178815  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:24:29.201294  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:24:29.233958  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:24:29.272878  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:24:29.303117  490787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:24:29.332806  490787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:24:29.355686  490787 ssh_runner.go:195] Run: openssl version
	I1026 09:24:29.363914  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:24:29.379060  490787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:29.385613  490787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:29.385726  490787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:24:29.428435  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:24:29.437744  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:24:29.447881  490787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:24:29.452266  490787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:24:29.452362  490787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:24:29.559533  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:24:29.583091  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:24:29.601328  490787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:24:29.611717  490787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:24:29.611809  490787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:24:29.702081  490787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
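	The ln -fs calls above build OpenSSL's hashed certificate directory: each CA in /etc/ssl/certs is reachable as <subject-hash>.0, which is how verification locates issuers. The hash in the symlink name comes straight from openssl; a sketch using the minikubeCA file from the log:
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	    # openssl rehash /etc/ssl/certs   # bulk equivalent on OpenSSL 1.1+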
	I1026 09:24:29.718188  490787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:24:29.729270  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:24:29.815864  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:24:29.898693  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:24:29.981711  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:24:30.111912  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:24:30.239681  490787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
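	Each check above uses -checkend 86400, which exits nonzero if the certificate expires within the next 86400 seconds (24 hours); that exit code is what decides whether certs get regenerated. A standalone sketch:
	
	    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	      echo 'cert good for at least another day'
	    else
	      echo 'cert expires within 24h'   # exit status 1 would trigger regeneration
	    fi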
	I1026 09:24:30.312576  490787 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-289159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-289159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:24:30.312719  490787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:24:30.312818  490787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:24:30.461203  490787 cri.go:89] found id: "003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826"
	I1026 09:24:30.461281  490787 cri.go:89] found id: "4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75"
	I1026 09:24:30.461310  490787 cri.go:89] found id: "958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e"
	I1026 09:24:30.461329  490787 cri.go:89] found id: "97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6"
	I1026 09:24:30.461349  490787 cri.go:89] found id: ""
	I1026 09:24:30.461429  490787 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:24:30.500984  490787 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:24:30Z" level=error msg="open /run/runc: no such file or directory"
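	The runc failure above is benign: /run/runc holds runc's container state, and it does not exist yet on a freshly restarted node, so the unpause check simply falls through and the flow continues on the next line. The same question can be answered through the CRI instead, as the preceding crictl call does; a sketch:
	
	    # List kube-system containers via the CRI rather than runc state.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system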
	I1026 09:24:30.501105  490787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:24:30.517173  490787 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:24:30.517196  490787 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:24:30.517278  490787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:24:30.531802  490787 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:24:30.532536  490787 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-289159" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:30.532875  490787 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-289159" cluster setting kubeconfig missing "default-k8s-diff-port-289159" context setting]
	I1026 09:24:30.533408  490787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:30.535347  490787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:24:30.555578  490787 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 09:24:30.555645  490787 kubeadm.go:601] duration metric: took 38.440725ms to restartPrimaryControlPlane
	I1026 09:24:30.555662  490787 kubeadm.go:402] duration metric: took 243.097489ms to StartCluster
	I1026 09:24:30.555678  490787 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:30.555787  490787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:24:30.556842  490787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:24:30.557115  490787 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:24:30.557461  490787 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:24:30.557542  490787 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-289159"
	I1026 09:24:30.557560  490787 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-289159"
	W1026 09:24:30.557567  490787 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:24:30.557589  490787 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:24:30.558023  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.558640  490787 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:24:30.558772  490787 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-289159"
	I1026 09:24:30.558808  490787 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-289159"
	W1026 09:24:30.558834  490787 addons.go:247] addon dashboard should already be in state true
	I1026 09:24:30.558874  490787 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:24:30.559029  490787 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-289159"
	I1026 09:24:30.559069  490787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-289159"
	I1026 09:24:30.559415  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.559645  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.580493  490787 out.go:179] * Verifying Kubernetes components...
	I1026 09:24:30.585503  490787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:24:30.611322  490787 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:24:30.615333  490787 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:24:30.619308  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:24:30.619337  490787 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:24:30.619400  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:30.624634  490787 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:24:30.629757  490787 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:24:30.629781  490787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:24:30.629848  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:30.632847  490787 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-289159"
	W1026 09:24:30.632869  490787 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:24:30.632894  490787 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:24:30.633305  490787 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:24:30.681126  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:30.698267  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:24:30.706442  490787 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:24:30.706470  490787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:24:30.706535  490787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:24:30.735908  490787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	W1026 09:24:28.831220  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:31.328855  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:31.122369  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:24:31.122410  490787 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:24:31.163069  490787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:24:31.179920  490787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:24:31.183593  490787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:24:31.286765  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:24:31.286788  490787 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:24:31.446453  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:24:31.446490  490787 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:24:31.552114  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:24:31.552146  490787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:24:31.681101  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:24:31.681131  490787 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:24:31.708592  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:24:31.708625  490787 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:24:31.738805  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:24:31.738875  490787 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:24:31.764662  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:24:31.764738  490787 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:24:31.788171  490787 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:24:31.788246  490787 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:24:31.826572  490787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
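[Editor's sketch] Each dashboard manifest is scp'd under /etc/kubernetes/addons and then applied in a single batched kubectl invocation over SSH, with KUBECONFIG pointed at the in-cluster file (sudo accepts the leading VAR=value assignment). A small illustration of how such a command line is assembled (names are from the log; the helper itself is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // applyCmd builds "sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ..."
    func applyCmd(kubeconfig, kubectl string, manifests []string) string {
        args := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return strings.Join(args, " ")
    }

    func main() {
        fmt.Println(applyCmd(
            "/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            []string{
                "/etc/kubernetes/addons/dashboard-ns.yaml",
                "/etc/kubernetes/addons/dashboard-svc.yaml",
            },
        ))
    }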
	W1026 09:24:33.831237  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	W1026 09:24:36.321776  488173 pod_ready.go:104] pod "coredns-5dd5756b68-h6qmf" is not "Ready", error: <nil>
	I1026 09:24:37.238110  490787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.075003762s)
	I1026 09:24:39.008363  490787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.828405668s)
	I1026 09:24:39.008428  490787 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.824811275s)
	I1026 09:24:39.008461  490787 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-289159" to be "Ready" ...
	I1026 09:24:39.008773  490787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.182096849s)
	I1026 09:24:39.011761  490787 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-289159 addons enable metrics-server
	
	I1026 09:24:39.014352  490787 node_ready.go:49] node "default-k8s-diff-port-289159" is "Ready"
	I1026 09:24:39.014409  490787 node_ready.go:38] duration metric: took 5.900819ms for node "default-k8s-diff-port-289159" to be "Ready" ...
	I1026 09:24:39.014425  490787 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:24:39.014502  490787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:24:39.017756  490787 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1026 09:24:38.325985  488173 pod_ready.go:94] pod "coredns-5dd5756b68-h6qmf" is "Ready"
	I1026 09:24:38.326017  488173 pod_ready.go:86] duration metric: took 28.010050296s for pod "coredns-5dd5756b68-h6qmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.332197  488173 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.340000  488173 pod_ready.go:94] pod "etcd-old-k8s-version-167519" is "Ready"
	I1026 09:24:38.340041  488173 pod_ready.go:86] duration metric: took 7.805721ms for pod "etcd-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.344147  488173 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.355596  488173 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-167519" is "Ready"
	I1026 09:24:38.355627  488173 pod_ready.go:86] duration metric: took 11.440648ms for pod "kube-apiserver-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.359714  488173 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.524252  488173 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-167519" is "Ready"
	I1026 09:24:38.524288  488173 pod_ready.go:86] duration metric: took 164.544167ms for pod "kube-controller-manager-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:38.725479  488173 pod_ready.go:83] waiting for pod "kube-proxy-nxhdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.123189  488173 pod_ready.go:94] pod "kube-proxy-nxhdx" is "Ready"
	I1026 09:24:39.123228  488173 pod_ready.go:86] duration metric: took 397.707344ms for pod "kube-proxy-nxhdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.324190  488173 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.723436  488173 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-167519" is "Ready"
	I1026 09:24:39.723468  488173 pod_ready.go:86] duration metric: took 399.187774ms for pod "kube-scheduler-old-k8s-version-167519" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:24:39.723481  488173 pod_ready.go:40] duration metric: took 29.412142997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:24:39.786148  488173 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1026 09:24:39.789489  488173 out.go:203] 
	W1026 09:24:39.792428  488173 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 09:24:39.795360  488173 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 09:24:39.798373  488173 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-167519" cluster and "default" namespace by default
	I1026 09:24:39.020671  490787 addons.go:514] duration metric: took 8.463195004s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1026 09:24:39.036018  490787 api_server.go:72] duration metric: took 8.478857212s to wait for apiserver process to appear ...
	I1026 09:24:39.036053  490787 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:24:39.036073  490787 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1026 09:24:39.045306  490787 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
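[Editor's sketch] The healthz wait is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding once it returns 200 with body "ok". A standalone sketch of that probe (InsecureSkipVerify stands in for the CA handling the real client performs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8444/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }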
	I1026 09:24:39.047328  490787 api_server.go:141] control plane version: v1.34.1
	I1026 09:24:39.047367  490787 api_server.go:131] duration metric: took 11.306673ms to wait for apiserver health ...
	I1026 09:24:39.047377  490787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:24:39.052684  490787 system_pods.go:59] 8 kube-system pods found
	I1026 09:24:39.052726  490787 system_pods.go:61] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:24:39.052736  490787 system_pods.go:61] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:24:39.052742  490787 system_pods.go:61] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:24:39.052750  490787 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:24:39.052758  490787 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:24:39.052767  490787 system_pods.go:61] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:24:39.052775  490787 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:24:39.052788  490787 system_pods.go:61] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:24:39.052794  490787 system_pods.go:74] duration metric: took 5.410966ms to wait for pod list to return data ...
	I1026 09:24:39.052802  490787 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:24:39.055999  490787 default_sa.go:45] found service account: "default"
	I1026 09:24:39.056025  490787 default_sa.go:55] duration metric: took 3.212661ms for default service account to be created ...
	I1026 09:24:39.056043  490787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:24:39.059018  490787 system_pods.go:86] 8 kube-system pods found
	I1026 09:24:39.059050  490787 system_pods.go:89] "coredns-66bc5c9577-szwxb" [1ed38531-1f76-46dd-a820-dbd4bfafbfb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:24:39.059060  490787 system_pods.go:89] "etcd-default-k8s-diff-port-289159" [3031feda-68ed-4a86-ad1b-0662e57f9b68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:24:39.059066  490787 system_pods.go:89] "kindnet-7kfgn" [5264ae13-85bc-421f-944d-439d3eb74d24] Running
	I1026 09:24:39.059073  490787 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289159" [2d55bcad-7261-496c-8952-81f752b22ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:24:39.059080  490787 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289159" [2a27c819-adc4-4b9a-9cdf-373d1197e942] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:24:39.059089  490787 system_pods.go:89] "kube-proxy-kzrr9" [8c20778a-d858-442a-bf2f-03c3e155dcd9] Running
	I1026 09:24:39.059096  490787 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289159" [317e9e05-adaf-488f-803e-b56ecf1dc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:24:39.059111  490787 system_pods.go:89] "storage-provisioner" [976e1cd6-3736-49e5-a1da-1d28250279ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:24:39.059118  490787 system_pods.go:126] duration metric: took 3.069767ms to wait for k8s-apps to be running ...
	I1026 09:24:39.059126  490787 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:24:39.059180  490787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:24:39.074284  490787 system_svc.go:56] duration metric: took 15.14847ms WaitForService to wait for kubelet
	I1026 09:24:39.074316  490787 kubeadm.go:586] duration metric: took 8.51716086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:24:39.074337  490787 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:24:39.077671  490787 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:24:39.077717  490787 node_conditions.go:123] node cpu capacity is 2
	I1026 09:24:39.077730  490787 node_conditions.go:105] duration metric: took 3.388097ms to run NodePressure ...
	I1026 09:24:39.077745  490787 start.go:241] waiting for startup goroutines ...
	I1026 09:24:39.077753  490787 start.go:246] waiting for cluster config update ...
	I1026 09:24:39.077767  490787 start.go:255] writing updated cluster config ...
	I1026 09:24:39.078059  490787 ssh_runner.go:195] Run: rm -f paused
	I1026 09:24:39.083509  490787 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:24:39.092836  490787 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szwxb" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:24:41.105209  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:43.600503  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:45.602032  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:47.604109  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:50.100554  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:52.604452  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:24:55.101268  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
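[Editor's sketch] The pod_ready warnings above come from polling each pod until its PodReady condition reports True (or the pod is gone). The predicate reduces to a scan of the pod's status conditions; a sketch with client-go types (isPodReady is a hypothetical name for what pod_ready.go checks):

    package ready

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }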
	
	
	==> CRI-O <==
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.548663649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.558103344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.559188256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.581762783Z" level=info msg="Created container 092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s/dashboard-metrics-scraper" id=1ecac2fb-f7e8-41eb-9a26-558a0ae443ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.583116639Z" level=info msg="Starting container: 092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483" id=4191061e-62d9-4507-a1ed-ff20655160df name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.585130211Z" level=info msg="Started container" PID=1660 containerID=092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s/dashboard-metrics-scraper id=4191061e-62d9-4507-a1ed-ff20655160df name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf
	Oct 26 09:24:45 old-k8s-version-167519 conmon[1658]: conmon 092688620d13cf3367c5 <ninfo>: container 1660 exited with status 1
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.743041991Z" level=info msg="Removing container: 7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef" id=7f933fc3-7003-4286-aeb9-c02befc1be19 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.757080876Z" level=info msg="Error loading conmon cgroup of container 7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef: cgroup deleted" id=7f933fc3-7003-4286-aeb9-c02befc1be19 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:24:45 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:45.761414191Z" level=info msg="Removed container 7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s/dashboard-metrics-scraper" id=7f933fc3-7003-4286-aeb9-c02befc1be19 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.249310923Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.254943045Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.254977441Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.254998389Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.260719571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.260891183Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.260967328Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.267675386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.267837423Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.267912697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.272689842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.273841791Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.273935758Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.282799831Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:24:47 old-k8s-version-167519 crio[649]: time="2025-10-26T09:24:47.283509739Z" level=info msg="Updated default CNI network name to kindnet"
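[Editor's sketch] The CNI monitoring lines show CRI-O watching /etc/cni/net.d: kindnet writes 10-kindnet.conflist.temp and renames it into place, and each CREATE/WRITE/RENAME event triggers a re-read of the conflists and an update of the default network. A minimal sketch of such a directory watch using fsnotify (illustrative only, not CRI-O's implementation):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            // CRI-O reacts to CREATE/WRITE/RENAME here by re-reading the
            // conflist files and updating its default CNI network
            log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
        }
    }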
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	092688620d13c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   ba9172b61d371       dashboard-metrics-scraper-5f989dc9cf-nc85s       kubernetes-dashboard
	f594970607af9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   f446c9ec29cc2       storage-provisioner                              kube-system
	1d8d480301582       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   24 seconds ago      Running             kubernetes-dashboard        0                   47e71626540fa       kubernetes-dashboard-8694d4445c-2z5gd            kubernetes-dashboard
	cef106e961046       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   6878946b02ee2       coredns-5dd5756b68-h6qmf                         kube-system
	c15736d898229       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   65099d311817c       busybox                                          default
	72253558ec19e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   5138ce82347ae       kube-proxy-nxhdx                                 kube-system
	25a2d3d596357       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   313bb3731313a       kindnet-ljrzw                                    kube-system
	661f9947ec075       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   f446c9ec29cc2       storage-provisioner                              kube-system
	71eca1ab06e9f       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   19b8bd1065fa2       kube-apiserver-old-k8s-version-167519            kube-system
	d6f0b67fc7a92       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   a224cebad46e1       kube-scheduler-old-k8s-version-167519            kube-system
	ab3db05a45deb       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   737a3957e93ca       etcd-old-k8s-version-167519                      kube-system
	ebcf8b7c4e306       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   a38684c579e66       kube-controller-manager-old-k8s-version-167519   kube-system
	
	
	==> coredns [cef106e961046296a2cb95911ff65cc35c4668e21eee6d64266403c4b0250c33] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43367 - 32669 "HINFO IN 5005322454416716139.758283188886650633. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014287314s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-167519
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-167519
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=old-k8s-version-167519
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_22_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:22:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-167519
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:22:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:24:37 +0000   Sun, 26 Oct 2025 09:23:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-167519
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1a149092-d049-4ee0-944f-a1babc9259c8
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-h6qmf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-167519                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-ljrzw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-167519             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-167519    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-nxhdx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-167519             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-nc85s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2z5gd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 106s                   kube-proxy       
	  Normal  Starting                 47s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m                     kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                     kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                     kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           108s                   node-controller  Node old-k8s-version-167519 event: Registered Node old-k8s-version-167519 in Controller
	  Normal  NodeReady                92s                    kubelet          Node old-k8s-version-167519 status is now: NodeReady
	  Normal  Starting                 57s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node old-k8s-version-167519 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node old-k8s-version-167519 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                    node-controller  Node old-k8s-version-167519 event: Registered Node old-k8s-version-167519 in Controller
	
	
	==> dmesg <==
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab3db05a45deb6bea25b1d1de0e1072710d4748379c32ed072990766bd661dd3] <==
	{"level":"info","ts":"2025-10-26T09:24:01.306961Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-26T09:24:01.307057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:24:01.307083Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T09:24:01.310911Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T09:24:01.316803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T09:24:01.317087Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T09:24:01.344414Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T09:24:01.349941Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T09:24:01.34435Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T09:24:01.355474Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T09:24:01.355582Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T09:24:02.278818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T09:24:02.278869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T09:24:02.278895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T09:24:02.278908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.278914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.278925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.278932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T09:24:02.293987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-167519 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T09:24:02.294031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:24:02.295001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-26T09:24:02.295141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T09:24:02.295931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T09:24:02.322855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T09:24:02.323168Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:24:57 up  3:07,  0 user,  load average: 4.00, 3.13, 2.75
	Linux old-k8s-version-167519 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [25a2d3d5963571b5d87758e7d01d3e8fbafe81732722b0e6ad290d688e909afa] <==
	I1026 09:24:07.015839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:24:07.099147       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:24:07.099353       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:24:07.099395       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:24:07.099448       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:24:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:24:07.304730       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:24:07.304759       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:24:07.304767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:24:07.305054       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:24:37.249321       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:24:37.305058       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:24:37.305305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:24:37.305379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1026 09:24:38.505512       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:24:38.505641       1 metrics.go:72] Registering metrics
	I1026 09:24:38.505734       1 controller.go:711] "Syncing nftables rules"
	I1026 09:24:47.248215       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:24:47.248402       1 main.go:301] handling current node
	I1026 09:24:57.251126       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:24:57.251156       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71eca1ab06e9f401fcbb26b13ea7782bc9fb8408408ae068731fb754c9192995] <==
	I1026 09:24:06.655972       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:24:06.657512       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 09:24:06.663967       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 09:24:06.666509       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 09:24:06.666584       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:24:06.680903       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 09:24:06.681588       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 09:24:06.683012       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 09:24:06.683070       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 09:24:06.683108       1 aggregator.go:166] initial CRD sync complete...
	I1026 09:24:06.683114       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 09:24:06.683119       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:24:06.683125       1 cache.go:39] Caches are synced for autoregister controller
	E1026 09:24:06.975952       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:24:07.136491       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:24:10.053892       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 09:24:10.119039       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 09:24:10.152379       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:24:10.167028       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:24:10.177417       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 09:24:10.232695       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.154.219"}
	I1026 09:24:10.254170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.217.244"}
	I1026 09:24:20.257113       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 09:24:20.273723       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:24:20.286254       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ebcf8b7c4e3060a1abd28a4f831dbec6225a03e23149c701a93b6a01c65593bc] <==
	I1026 09:24:20.324296       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 09:24:20.333966       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-nc85s"
	I1026 09:24:20.334391       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-2z5gd"
	I1026 09:24:20.342880       1 shared_informer.go:318] Caches are synced for cronjob
	I1026 09:24:20.353498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.554386ms"
	I1026 09:24:20.374316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.313608ms"
	I1026 09:24:20.387100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.550507ms"
	I1026 09:24:20.387480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.203µs"
	I1026 09:24:20.395255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="179.686µs"
	I1026 09:24:20.411242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.951289ms"
	I1026 09:24:20.411398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.222µs"
	I1026 09:24:20.427426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.092µs"
	I1026 09:24:20.445862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.893µs"
	I1026 09:24:20.679932       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 09:24:20.679966       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 09:24:20.740666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 09:24:25.681264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.516µs"
	I1026 09:24:26.683317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.944µs"
	I1026 09:24:27.709486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.127µs"
	I1026 09:24:33.759511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.562855ms"
	I1026 09:24:33.760473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.619µs"
	I1026 09:24:38.086265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.443743ms"
	I1026 09:24:38.087314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.672µs"
	I1026 09:24:45.762096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.607µs"
	I1026 09:24:50.711974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.316µs"
	
	
	==> kube-proxy [72253558ec19ee7592abca9835453e0f5cc9ab93df04418f1022780e0b3e9acb] <==
	I1026 09:24:09.036971       1 server_others.go:69] "Using iptables proxy"
	I1026 09:24:09.093456       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1026 09:24:09.639581       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:24:09.641882       1 server_others.go:152] "Using iptables Proxier"
	I1026 09:24:09.641974       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 09:24:09.642005       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 09:24:09.642059       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 09:24:09.642291       1 server.go:846] "Version info" version="v1.28.0"
	I1026 09:24:09.642484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:09.683590       1 config.go:188] "Starting service config controller"
	I1026 09:24:09.683621       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 09:24:09.683639       1 config.go:97] "Starting endpoint slice config controller"
	I1026 09:24:09.683642       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 09:24:09.688038       1 config.go:315] "Starting node config controller"
	I1026 09:24:09.688061       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 09:24:09.783762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 09:24:09.783814       1 shared_informer.go:318] Caches are synced for service config
	I1026 09:24:09.809315       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d6f0b67fc7a92431594127a805c5e2a9df01b5bdd70421309c258bb58ff6bfe6] <==
	I1026 09:24:05.048420       1 serving.go:348] Generated self-signed cert in-memory
	I1026 09:24:09.546126       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 09:24:09.546237       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:09.553728       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 09:24:09.553915       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 09:24:09.553956       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 09:24:09.554000       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 09:24:09.574572       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:24:09.579545       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 09:24:09.579741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:09.579957       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 09:24:09.655362       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 09:24:09.682830       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 09:24:09.682904       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: I1026 09:24:20.474089     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/231051f5-8219-4d3d-8e81-8c0c018c2ab0-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-nc85s\" (UID: \"231051f5-8219-4d3d-8e81-8c0c018c2ab0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s"
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: I1026 09:24:20.474252     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdnsd\" (UniqueName: \"kubernetes.io/projected/fab603ff-4a4d-4c2c-9dc6-16afee3b82cc-kube-api-access-zdnsd\") pod \"kubernetes-dashboard-8694d4445c-2z5gd\" (UID: \"fab603ff-4a4d-4c2c-9dc6-16afee3b82cc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2z5gd"
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: I1026 09:24:20.474387     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fab603ff-4a4d-4c2c-9dc6-16afee3b82cc-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-2z5gd\" (UID: \"fab603ff-4a4d-4c2c-9dc6-16afee3b82cc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2z5gd"
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: W1026 09:24:20.755038     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf WatchSource:0}: Error finding container ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf: Status 404 returned error can't find the container with id ba9172b61d3714e44f0acdac64802658d7fd9bc2de1f163ad88ea1e88ddebadf
	Oct 26 09:24:20 old-k8s-version-167519 kubelet[775]: W1026 09:24:20.761644     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f43cbb714de44c681d1c3d2f7084a21de7672ca7c2fff08d2ad105996ff50be2/crio-47e71626540fa0422eca00a64fd5f7aa26139aea0b2ff08cfc177d519371610c WatchSource:0}: Error finding container 47e71626540fa0422eca00a64fd5f7aa26139aea0b2ff08cfc177d519371610c: Status 404 returned error can't find the container with id 47e71626540fa0422eca00a64fd5f7aa26139aea0b2ff08cfc177d519371610c
	Oct 26 09:24:25 old-k8s-version-167519 kubelet[775]: I1026 09:24:25.650866     775 scope.go:117] "RemoveContainer" containerID="c2b1f682f66434f97b55952cf1907048974b8ef9fb796e33d4d87fdd1ca11021"
	Oct 26 09:24:26 old-k8s-version-167519 kubelet[775]: I1026 09:24:26.655033     775 scope.go:117] "RemoveContainer" containerID="c2b1f682f66434f97b55952cf1907048974b8ef9fb796e33d4d87fdd1ca11021"
	Oct 26 09:24:26 old-k8s-version-167519 kubelet[775]: I1026 09:24:26.655409     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:26 old-k8s-version-167519 kubelet[775]: E1026 09:24:26.655728     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:27 old-k8s-version-167519 kubelet[775]: I1026 09:24:27.666902     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:27 old-k8s-version-167519 kubelet[775]: E1026 09:24:27.667209     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:30 old-k8s-version-167519 kubelet[775]: I1026 09:24:30.691287     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:30 old-k8s-version-167519 kubelet[775]: E1026 09:24:30.691640     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:37 old-k8s-version-167519 kubelet[775]: I1026 09:24:37.715763     775 scope.go:117] "RemoveContainer" containerID="661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe"
	Oct 26 09:24:37 old-k8s-version-167519 kubelet[775]: I1026 09:24:37.757967     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2z5gd" podStartSLOduration=5.721598618 podCreationTimestamp="2025-10-26 09:24:20 +0000 UTC" firstStartedPulling="2025-10-26 09:24:20.770915966 +0000 UTC m=+20.629990141" lastFinishedPulling="2025-10-26 09:24:32.807212176 +0000 UTC m=+32.666286343" observedRunningTime="2025-10-26 09:24:33.727965138 +0000 UTC m=+33.587039305" watchObservedRunningTime="2025-10-26 09:24:37.75789482 +0000 UTC m=+37.616968995"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: I1026 09:24:45.545032     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: I1026 09:24:45.737965     775 scope.go:117] "RemoveContainer" containerID="7ef879b57e2b3670028ec13251c517feba0cb7ebcdf1ec313d6f44258be62aef"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: I1026 09:24:45.738244     775 scope.go:117] "RemoveContainer" containerID="092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	Oct 26 09:24:45 old-k8s-version-167519 kubelet[775]: E1026 09:24:45.738531     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:50 old-k8s-version-167519 kubelet[775]: I1026 09:24:50.691231     775 scope.go:117] "RemoveContainer" containerID="092688620d13cf3367c5f11e879f2f95f2091fc713e91af3e5db4d33776f2483"
	Oct 26 09:24:50 old-k8s-version-167519 kubelet[775]: E1026 09:24:50.692030     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nc85s_kubernetes-dashboard(231051f5-8219-4d3d-8e81-8c0c018c2ab0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nc85s" podUID="231051f5-8219-4d3d-8e81-8c0c018c2ab0"
	Oct 26 09:24:52 old-k8s-version-167519 kubelet[775]: I1026 09:24:52.188168     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 09:24:52 old-k8s-version-167519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:24:52 old-k8s-version-167519 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:24:52 old-k8s-version-167519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1d8d4803015824aa94be1a8ee92ed81a5ee87510fed7adf0d20b310895cc7673] <==
	2025/10/26 09:24:32 Starting overwatch
	2025/10/26 09:24:32 Using namespace: kubernetes-dashboard
	2025/10/26 09:24:32 Using in-cluster config to connect to apiserver
	2025/10/26 09:24:32 Using secret token for csrf signing
	2025/10/26 09:24:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:24:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:24:32 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 09:24:32 Generating JWE encryption key
	2025/10/26 09:24:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:24:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:24:33 Initializing JWE encryption key from synchronized object
	2025/10/26 09:24:33 Creating in-cluster Sidecar client
	2025/10/26 09:24:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:24:33 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [661f9947ec07596e5d89da75da0083ccdbf2a35dcbab1d596416f862ddda6efe] <==
	I1026 09:24:07.521974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:24:37.524345       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f594970607af9f64c654df6707c1df4091dfe2988957faf28b530298bae0041c] <==
	I1026 09:24:37.824830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:24:37.856458       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:24:37.856502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 09:24:55.308610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:24:55.308681       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21d96fe7-10ce-4078-9698-96debabaa3e8", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-167519_586f38bd-c1d8-480b-8390-da2cf031c676 became leader
	I1026 09:24:55.308781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-167519_586f38bd-c1d8-480b-8390-da2cf031c676!
	I1026 09:24:55.409218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-167519_586f38bd-c1d8-480b-8390-da2cf031c676!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-167519 -n old-k8s-version-167519
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-167519 -n old-k8s-version-167519: exit status 2 (396.841635ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-167519 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.42s)
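Note: the kubelet log above ends with systemd stopping kubelet.service at 09:24:52, which lines up with the pause attempt for old-k8s-version-167519 (recorded with no end time in the Audit table further down): minikube disables the kubelet before freezing containers, so a failed pause can leave the kubelet down while minikube status still reports the API server as Running (hence the exit status 2 above). As a hand-run follow-up, assuming the profile still exists (illustrative only; the harness did not execute these):

	out/minikube-linux-arm64 status -p old-k8s-version-167519
	out/minikube-linux-arm64 ssh -p old-k8s-version-167519 -- sudo ls -ld /run/runc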

TestStartStop/group/default-k8s-diff-port/serial/Pause (8.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-289159 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-289159 --alsologtostderr -v=1: exit status 80 (2.260547144s)

-- stdout --
	* Pausing node default-k8s-diff-port-289159 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 09:25:24.085511  496612 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:25:24.085701  496612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:24.085713  496612 out.go:374] Setting ErrFile to fd 2...
	I1026 09:25:24.085726  496612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:24.086007  496612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:25:24.086280  496612 out.go:368] Setting JSON to false
	I1026 09:25:24.086297  496612 mustload.go:65] Loading cluster: default-k8s-diff-port-289159
	I1026 09:25:24.086674  496612 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:24.087186  496612 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-289159 --format={{.State.Status}}
	I1026 09:25:24.107153  496612 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:25:24.107487  496612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:24.200855  496612 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-26 09:25:24.190310315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:24.203060  496612 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-289159 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 09:25:24.206447  496612 out.go:179] * Pausing node default-k8s-diff-port-289159 ... 
	I1026 09:25:24.209418  496612 host.go:66] Checking if "default-k8s-diff-port-289159" exists ...
	I1026 09:25:24.209772  496612 ssh_runner.go:195] Run: systemctl --version
	I1026 09:25:24.209824  496612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-289159
	I1026 09:25:24.246567  496612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/default-k8s-diff-port-289159/id_rsa Username:docker}
	I1026 09:25:24.358146  496612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:25:24.387397  496612 pause.go:52] kubelet running: true
	I1026 09:25:24.387469  496612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:25:24.713829  496612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:25:24.713933  496612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:25:24.799979  496612 cri.go:89] found id: "1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead"
	I1026 09:25:24.800005  496612 cri.go:89] found id: "9a9388f4f5ac4949a732221eb509d21231678fe4155231bd49a140d1e9fae63d"
	I1026 09:25:24.800010  496612 cri.go:89] found id: "612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f"
	I1026 09:25:24.800014  496612 cri.go:89] found id: "5e27883c19db0827c49f4c2614c23bd2fe0b2b8872d0aa74eadd85b5d5df8d20"
	I1026 09:25:24.800026  496612 cri.go:89] found id: "2fc2cbd7f301a63755c482b1b3dd4679382cfa7037c64f021dba12297c96e575"
	I1026 09:25:24.800031  496612 cri.go:89] found id: "003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826"
	I1026 09:25:24.800034  496612 cri.go:89] found id: "4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75"
	I1026 09:25:24.800038  496612 cri.go:89] found id: "958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e"
	I1026 09:25:24.800041  496612 cri.go:89] found id: "97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6"
	I1026 09:25:24.800056  496612 cri.go:89] found id: "968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	I1026 09:25:24.800064  496612 cri.go:89] found id: "6c9d9c9cb391226f6310c9075eb7e3d3395c852ecd5bae121b4a476b9ec84c4a"
	I1026 09:25:24.800067  496612 cri.go:89] found id: ""
	I1026 09:25:24.800122  496612 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:25:24.811642  496612 retry.go:31] will retry after 330.67196ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:25:24Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:25:25.143319  496612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:25:25.162383  496612 pause.go:52] kubelet running: false
	I1026 09:25:25.162498  496612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:25:25.402414  496612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:25:25.402512  496612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:25:25.502549  496612 cri.go:89] found id: "1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead"
	I1026 09:25:25.502617  496612 cri.go:89] found id: "9a9388f4f5ac4949a732221eb509d21231678fe4155231bd49a140d1e9fae63d"
	I1026 09:25:25.502637  496612 cri.go:89] found id: "612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f"
	I1026 09:25:25.502655  496612 cri.go:89] found id: "5e27883c19db0827c49f4c2614c23bd2fe0b2b8872d0aa74eadd85b5d5df8d20"
	I1026 09:25:25.502674  496612 cri.go:89] found id: "2fc2cbd7f301a63755c482b1b3dd4679382cfa7037c64f021dba12297c96e575"
	I1026 09:25:25.502707  496612 cri.go:89] found id: "003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826"
	I1026 09:25:25.502753  496612 cri.go:89] found id: "4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75"
	I1026 09:25:25.502770  496612 cri.go:89] found id: "958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e"
	I1026 09:25:25.502789  496612 cri.go:89] found id: "97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6"
	I1026 09:25:25.502810  496612 cri.go:89] found id: "968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	I1026 09:25:25.502840  496612 cri.go:89] found id: "6c9d9c9cb391226f6310c9075eb7e3d3395c852ecd5bae121b4a476b9ec84c4a"
	I1026 09:25:25.502867  496612 cri.go:89] found id: ""
	I1026 09:25:25.502959  496612 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:25:25.515755  496612 retry.go:31] will retry after 360.448319ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:25:25Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:25:25.877407  496612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:25:25.900863  496612 pause.go:52] kubelet running: false
	I1026 09:25:25.901053  496612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:25:26.146075  496612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:25:26.146276  496612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:25:26.223029  496612 cri.go:89] found id: "1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead"
	I1026 09:25:26.223099  496612 cri.go:89] found id: "9a9388f4f5ac4949a732221eb509d21231678fe4155231bd49a140d1e9fae63d"
	I1026 09:25:26.223133  496612 cri.go:89] found id: "612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f"
	I1026 09:25:26.223157  496612 cri.go:89] found id: "5e27883c19db0827c49f4c2614c23bd2fe0b2b8872d0aa74eadd85b5d5df8d20"
	I1026 09:25:26.223178  496612 cri.go:89] found id: "2fc2cbd7f301a63755c482b1b3dd4679382cfa7037c64f021dba12297c96e575"
	I1026 09:25:26.223211  496612 cri.go:89] found id: "003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826"
	I1026 09:25:26.223234  496612 cri.go:89] found id: "4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75"
	I1026 09:25:26.223253  496612 cri.go:89] found id: "958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e"
	I1026 09:25:26.223271  496612 cri.go:89] found id: "97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6"
	I1026 09:25:26.223322  496612 cri.go:89] found id: "968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	I1026 09:25:26.223340  496612 cri.go:89] found id: "6c9d9c9cb391226f6310c9075eb7e3d3395c852ecd5bae121b4a476b9ec84c4a"
	I1026 09:25:26.223373  496612 cri.go:89] found id: ""
	I1026 09:25:26.223457  496612 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:25:26.238840  496612 out.go:203] 
	W1026 09:25:26.241842  496612 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:25:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:25:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 09:25:26.242033  496612 out.go:285] * 
	* 
	W1026 09:25:26.250072  496612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 09:25:26.253077  496612 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-289159 --alsologtostderr -v=1 failed: exit status 80
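Note: the exit status 80 (GUEST_PAUSE) comes from the container-listing step: after disabling the kubelet, the pause path shells out to "sudo runc list -f json", which fails three times in a row with "open /run/runc: no such file or directory" before the retry loop gives up, even though the crictl queries just before it returned eleven running container IDs each time. A minimal way to re-run the failing step by hand, assuming the node is still up (an illustrative sketch, not something the harness executed):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-289159 -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-289159 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-289159 -- sudo crictl ps -a --quiet

If crictl keeps listing containers while runc cannot open its state directory, that points at the runc root under /run/runc rather than at the containers themselves.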
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-289159
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-289159:

-- stdout --
	[
	    {
	        "Id": "e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67",
	        "Created": "2025-10-26T09:22:35.695576526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:24:21.11130871Z",
	            "FinishedAt": "2025-10-26T09:24:20.116648194Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/hostname",
	        "HostsPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/hosts",
	        "LogPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67-json.log",
	        "Name": "/default-k8s-diff-port-289159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-289159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-289159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67",
	                "LowerDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-289159",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-289159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-289159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-289159",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-289159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b2e728367a8d3a532ebbd9d8fd74bdb98b5a0fddabfec5967aa20949a741d0b",
	            "SandboxKey": "/var/run/docker/netns/2b2e728367a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-289159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:af:b0:0b:9e:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "788f8e4ab8525806628d59d0a963ab3ec20463b77ce93fefea997bd8290d71c3",
	                    "EndpointID": "5aa6a9236d364a14545bef7a1ef39022d1875a45aa0a8eaae288287fa95e9cc7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-289159",
	                        "e75dab2714ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
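Note: one detail worth pulling out of the inspect output above: the profile's non-default API server port 8444 (the point of the default-k8s-diff-port configuration) is published to 127.0.0.1:33438, alongside SSH on 33435, and the container state is still "running". Assuming the container is still up, the mapping can be confirmed without parsing the JSON (illustrative command, not part of the test run):

	docker port default-k8s-diff-port-289159 8444

which should print 127.0.0.1:33438, matching the NetworkSettings block above.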
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159: exit status 2 (458.670099ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-289159 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-289159 logs -n 25: (1.510971025s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	│ delete  │ -p kubernetes-upgrade-275732                                                                                                                                                                                                                  │ kubernetes-upgrade-275732    │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:21 UTC │
	│ start   │ -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ cert-options-094384 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                                                                                               │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:25:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:25:01.546147  494585 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:25:01.546275  494585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:01.546286  494585 out.go:374] Setting ErrFile to fd 2...
	I1026 09:25:01.546292  494585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:01.546543  494585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:25:01.547043  494585 out.go:368] Setting JSON to false
	I1026 09:25:01.548165  494585 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11252,"bootTime":1761459450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:25:01.548238  494585 start.go:141] virtualization:  
	I1026 09:25:01.552321  494585 out.go:179] * [embed-certs-204381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:25:01.556729  494585 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:25:01.556908  494585 notify.go:220] Checking for updates...
	I1026 09:25:01.566798  494585 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:25:01.570149  494585 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:25:01.573277  494585 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:25:01.577089  494585 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:25:01.580261  494585 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:25:01.584011  494585 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:01.584155  494585 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:25:01.627085  494585 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:25:01.627214  494585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:01.689238  494585 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:25:01.677953331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:01.689356  494585 docker.go:318] overlay module found
	I1026 09:25:01.692601  494585 out.go:179] * Using the docker driver based on user configuration
	I1026 09:25:01.695615  494585 start.go:305] selected driver: docker
	I1026 09:25:01.695645  494585 start.go:925] validating driver "docker" against <nil>
	I1026 09:25:01.695660  494585 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:25:01.696506  494585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:01.756507  494585 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:25:01.747018296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:01.756658  494585 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 09:25:01.756906  494585 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:25:01.759829  494585 out.go:179] * Using Docker driver with root privileges
	I1026 09:25:01.762906  494585 cni.go:84] Creating CNI manager for ""
	I1026 09:25:01.762988  494585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:25:01.763002  494585 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:25:01.763094  494585 start.go:349] cluster config:
	{Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:25:01.766473  494585 out.go:179] * Starting "embed-certs-204381" primary control-plane node in "embed-certs-204381" cluster
	I1026 09:25:01.769377  494585 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:25:01.772386  494585 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:25:01.775338  494585 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:01.775427  494585 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:25:01.775441  494585 cache.go:58] Caching tarball of preloaded images
	I1026 09:25:01.775447  494585 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:25:01.775590  494585 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:25:01.775603  494585 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:25:01.775735  494585 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json ...
	I1026 09:25:01.775781  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json: {Name:mk7979a4ff906b2642aec86dd01313a076c79266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:01.797224  494585 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:25:01.797248  494585 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:25:01.797270  494585 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:25:01.797296  494585 start.go:360] acquireMachinesLock for embed-certs-204381: {Name:mkd161c65630ff13edac2ff621a7dae8e5ffecd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:01.797413  494585 start.go:364] duration metric: took 100.883µs to acquireMachinesLock for "embed-certs-204381"
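
The lock spec printed just above (Delay:500ms Timeout:10m0s) indicates that acquireMachinesLock retries a per-profile machine lock on a fixed delay until a timeout expires, then records the elapsed time as a "duration metric" line. A minimal Go sketch of that shape; tryLock is a hypothetical stand-in for minikube's real file-based acquisition, so this is illustrative rather than lock.go itself:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // tryLock is a hypothetical stand-in for the file-lock acquisition
    // minikube performs per machine name; here it always succeeds.
    func tryLock(name string) bool { return true }

    func acquire(name string, delay, timeout time.Duration) error {
    	start := time.Now()
    	for {
    		if tryLock(name) {
    			// mirrors the "duration metric: took ..." lines in the log
    			fmt.Printf("duration metric: took %s to acquireMachinesLock for %q\n", time.Since(start), name)
    			return nil
    		}
    		if time.Since(start) > timeout {
    			return errors.New("timed out acquiring lock for " + name)
    		}
    		time.Sleep(delay) // Delay:500ms in the printed lock spec
    	}
    }

    func main() {
    	if err := acquire("embed-certs-204381", 500*time.Millisecond, 10*time.Minute); err != nil {
    		panic(err)
    	}
    }
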
	I1026 09:25:01.797440  494585 start.go:93] Provisioning new machine with config: &{Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:25:01.797509  494585 start.go:125] createHost starting for "" (driver="docker")
	W1026 09:25:02.600834  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:25:05.100713  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	I1026 09:25:01.801039  494585 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:25:01.801305  494585 start.go:159] libmachine.API.Create for "embed-certs-204381" (driver="docker")
	I1026 09:25:01.801357  494585 client.go:168] LocalClient.Create starting
	I1026 09:25:01.801449  494585 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:25:01.801491  494585 main.go:141] libmachine: Decoding PEM data...
	I1026 09:25:01.801508  494585 main.go:141] libmachine: Parsing certificate...
	I1026 09:25:01.801562  494585 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:25:01.801586  494585 main.go:141] libmachine: Decoding PEM data...
	I1026 09:25:01.801600  494585 main.go:141] libmachine: Parsing certificate...
	I1026 09:25:01.801992  494585 cli_runner.go:164] Run: docker network inspect embed-certs-204381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:25:01.819246  494585 cli_runner.go:211] docker network inspect embed-certs-204381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:25:01.819339  494585 network_create.go:284] running [docker network inspect embed-certs-204381] to gather additional debugging logs...
	I1026 09:25:01.819362  494585 cli_runner.go:164] Run: docker network inspect embed-certs-204381
	W1026 09:25:01.836756  494585 cli_runner.go:211] docker network inspect embed-certs-204381 returned with exit code 1
	I1026 09:25:01.836792  494585 network_create.go:287] error running [docker network inspect embed-certs-204381]: docker network inspect embed-certs-204381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-204381 not found
	I1026 09:25:01.836812  494585 network_create.go:289] output of [docker network inspect embed-certs-204381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-204381 not found
	
	** /stderr **
	I1026 09:25:01.836932  494585 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:25:01.855318  494585 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:25:01.855692  494585 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:25:01.855953  494585 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:25:01.856392  494585 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cdc20}
	I1026 09:25:01.856415  494585 network_create.go:124] attempt to create docker network embed-certs-204381 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 09:25:01.856470  494585 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-204381 embed-certs-204381
	I1026 09:25:01.922566  494585 network_create.go:108] docker network embed-certs-204381 192.168.76.0/24 created
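
The three "skipping subnet" probes above step the third octet of the 192.168.x.0/24 candidates by 9 (49, 58, 67, ...) and take the first subnet with no existing bridge on it, which here lands on 192.168.76.0/24. A rough sketch of that selection loop; isTaken is a hypothetical stand-in for the interface/route check network.go performs against the docker bridges (br-256d72a548e0 et al. in the log):

    package main

    import "fmt"

    // isTaken is a hypothetical stand-in for checking whether an existing
    // docker bridge already occupies the candidate subnet.
    func isTaken(subnet string) bool {
    	used := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	return used[subnet]
    }

    func main() {
    	// candidates step the third octet by 9, as the log shows: 49, 58, 67, 76, ...
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if isTaken(subnet) {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 here
    		break
    	}
    }
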
	I1026 09:25:01.922602  494585 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-204381" container
	I1026 09:25:01.922704  494585 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:25:01.939146  494585 cli_runner.go:164] Run: docker volume create embed-certs-204381 --label name.minikube.sigs.k8s.io=embed-certs-204381 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:25:01.958698  494585 oci.go:103] Successfully created a docker volume embed-certs-204381
	I1026 09:25:01.958822  494585 cli_runner.go:164] Run: docker run --rm --name embed-certs-204381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-204381 --entrypoint /usr/bin/test -v embed-certs-204381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:25:02.552687  494585 oci.go:107] Successfully prepared a docker volume embed-certs-204381
	I1026 09:25:02.552739  494585 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:02.552759  494585 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:25:02.552844  494585 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-204381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
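
The extraction above runs tar inside a throwaway container so the lz4 preload tarball unpacks straight onto the named docker volume before the node container ever starts. A compact sketch of the same invocation from Go; the host path is shortened and the image digest dropped for readability, and the exec wrapper is illustrative rather than minikube's cli_runner:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// shortened: the log uses the full path under .minikube/cache/preloaded-tarball/
    	tarball := "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "embed-certs-204381:/extractDir",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("preload extraction failed: %v\n%s", err, out)
    	}
    }
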
	W1026 09:25:07.598630  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:25:09.599931  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	I1026 09:25:10.598166  490787 pod_ready.go:94] pod "coredns-66bc5c9577-szwxb" is "Ready"
	I1026 09:25:10.598195  490787 pod_ready.go:86] duration metric: took 31.505329014s for pod "coredns-66bc5c9577-szwxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.601269  490787 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.606448  490787 pod_ready.go:94] pod "etcd-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:10.606476  490787 pod_ready.go:86] duration metric: took 5.128616ms for pod "etcd-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.609028  490787 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.614599  490787 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:10.614632  490787 pod_ready.go:86] duration metric: took 5.530262ms for pod "kube-apiserver-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.616954  490787 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:06.945139  494585 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-204381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.392251432s)
	I1026 09:25:06.945187  494585 kic.go:203] duration metric: took 4.392424473s to extract preloaded images to volume ...
	W1026 09:25:06.945319  494585 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:25:06.945440  494585 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:25:07.020669  494585 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-204381 --name embed-certs-204381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-204381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-204381 --network embed-certs-204381 --ip 192.168.76.2 --volume embed-certs-204381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:25:07.348129  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Running}}
	I1026 09:25:07.367547  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:07.390128  494585 cli_runner.go:164] Run: docker exec embed-certs-204381 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:25:07.444996  494585 oci.go:144] the created container "embed-certs-204381" has a running status.
	I1026 09:25:07.445035  494585 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa...
	I1026 09:25:08.227304  494585 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:25:08.248278  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:08.265129  494585 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:25:08.265154  494585 kic_runner.go:114] Args: [docker exec --privileged embed-certs-204381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 09:25:08.307342  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:08.327696  494585 machine.go:93] provisionDockerMachine start ...
	I1026 09:25:08.327825  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:08.346701  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:08.347203  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:08.347221  494585 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:25:08.347901  494585 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60422->127.0.0.1:33440: read: connection reset by peer
	I1026 09:25:11.498531  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-204381
	
	I1026 09:25:11.498595  494585 ubuntu.go:182] provisioning hostname "embed-certs-204381"
	I1026 09:25:11.498668  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:11.516316  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:11.516633  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:11.516652  494585 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-204381 && echo "embed-certs-204381" | sudo tee /etc/hostname
	I1026 09:25:10.796677  490787 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:10.796710  490787 pod_ready.go:86] duration metric: took 179.724243ms for pod "kube-controller-manager-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.996789  490787 pod_ready.go:83] waiting for pod "kube-proxy-kzrr9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.395964  490787 pod_ready.go:94] pod "kube-proxy-kzrr9" is "Ready"
	I1026 09:25:11.396047  490787 pod_ready.go:86] duration metric: took 399.222849ms for pod "kube-proxy-kzrr9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.596928  490787 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.996845  490787 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:11.996878  490787 pod_ready.go:86] duration metric: took 399.92534ms for pod "kube-scheduler-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.996892  490787 pod_ready.go:40] duration metric: took 32.913347782s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:25:12.088215  490787 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:25:12.091674  490787 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-289159" cluster and "default" namespace by default
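
The pod_ready.go lines above poll each kube-system pod until its Ready condition turns True (or the pod is gone), recording the elapsed time per pod. A compact client-go sketch of that kind of readiness poll; the interval, timeout, and error handling are illustrative, not minikube's exact code, and the "gone" branch is omitted:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	name := "coredns-66bc5c9577-szwxb" // pod name taken from the log above
    	start := time.Now()
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // not fatal: keep polling while the pod churns
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Printf("duration metric: took %s, ready=%v\n", time.Since(start), err == nil)
    }
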
	I1026 09:25:11.684937  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-204381
	
	I1026 09:25:11.685019  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:11.703085  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:11.703406  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:11.703432  494585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-204381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-204381/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-204381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:25:11.854998  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:25:11.855022  494585 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:25:11.855046  494585 ubuntu.go:190] setting up certificates
	I1026 09:25:11.855057  494585 provision.go:84] configureAuth start
	I1026 09:25:11.855123  494585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-204381
	I1026 09:25:11.872807  494585 provision.go:143] copyHostCerts
	I1026 09:25:11.872874  494585 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:25:11.872884  494585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:25:11.872967  494585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:25:11.873073  494585 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:25:11.873078  494585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:25:11.873104  494585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:25:11.873152  494585 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:25:11.873158  494585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:25:11.873182  494585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:25:11.873226  494585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.embed-certs-204381 san=[127.0.0.1 192.168.76.2 embed-certs-204381 localhost minikube]
	I1026 09:25:12.784639  494585 provision.go:177] copyRemoteCerts
	I1026 09:25:12.784714  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:25:12.784763  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:12.802482  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:12.906470  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:25:12.924125  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 09:25:12.942876  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 09:25:12.962191  494585 provision.go:87] duration metric: took 1.107120342s to configureAuth
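
configureAuth above generated a server certificate whose SAN list covers every name the machine may be dialed by (127.0.0.1, 192.168.76.2, embed-certs-204381, localhost, minikube). A standard-library sketch producing a certificate with the same SAN list; for brevity it self-signs, whereas minikube signs with the ca.pem/ca-key.pem pair shown in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-204381"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs mirror the log: san=[127.0.0.1 192.168.76.2 embed-certs-204381 localhost minikube]
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"embed-certs-204381", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
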
	I1026 09:25:12.962219  494585 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:25:12.962407  494585 config.go:182] Loaded profile config "embed-certs-204381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:12.962524  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:12.982188  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:12.982495  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:12.982509  494585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:25:13.331752  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:25:13.331818  494585 machine.go:96] duration metric: took 5.004096726s to provisionDockerMachine
	I1026 09:25:13.331842  494585 client.go:171] duration metric: took 11.530474783s to LocalClient.Create
	I1026 09:25:13.331896  494585 start.go:167] duration metric: took 11.530592003s to libmachine.API.Create "embed-certs-204381"
	I1026 09:25:13.331922  494585 start.go:293] postStartSetup for "embed-certs-204381" (driver="docker")
	I1026 09:25:13.331945  494585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:25:13.332060  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:25:13.332147  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.350589  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.458950  494585 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:25:13.462267  494585 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:25:13.462296  494585 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:25:13.462308  494585 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:25:13.462362  494585 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:25:13.462452  494585 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:25:13.462570  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:25:13.470190  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:25:13.489226  494585 start.go:296] duration metric: took 157.277142ms for postStartSetup
	I1026 09:25:13.489602  494585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-204381
	I1026 09:25:13.509349  494585 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json ...
	I1026 09:25:13.509651  494585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:25:13.509693  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.528281  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.632280  494585 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:25:13.637306  494585 start.go:128] duration metric: took 11.839781918s to createHost
	I1026 09:25:13.637329  494585 start.go:83] releasing machines lock for "embed-certs-204381", held for 11.839906729s
	I1026 09:25:13.637406  494585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-204381
	I1026 09:25:13.654853  494585 ssh_runner.go:195] Run: cat /version.json
	I1026 09:25:13.654890  494585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:25:13.654905  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.654944  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.683685  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.696396  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.894305  494585 ssh_runner.go:195] Run: systemctl --version
	I1026 09:25:13.900936  494585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:25:13.940804  494585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:25:13.945304  494585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:25:13.945422  494585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:25:13.981871  494585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:25:13.981896  494585 start.go:495] detecting cgroup driver to use...
	I1026 09:25:13.981953  494585 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:25:13.982041  494585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:25:14.002598  494585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:25:14.017995  494585 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:25:14.018064  494585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:25:14.037517  494585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:25:14.057890  494585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:25:14.179435  494585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:25:14.308301  494585 docker.go:234] disabling docker service ...
	I1026 09:25:14.308366  494585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:25:14.330208  494585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:25:14.343352  494585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:25:14.458954  494585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:25:14.576769  494585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:25:14.590528  494585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:25:14.612884  494585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:25:14.613012  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.623609  494585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:25:14.623727  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.632843  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.641454  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.650635  494585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:25:14.659150  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.667801  494585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.681422  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.690783  494585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:25:14.699258  494585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:25:14.707169  494585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:25:14.814161  494585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:25:14.947634  494585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:25:14.947735  494585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:25:14.952392  494585 start.go:563] Will wait 60s for crictl version
	I1026 09:25:14.952514  494585 ssh_runner.go:195] Run: which crictl
	I1026 09:25:14.956891  494585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:25:14.985960  494585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:25:14.986105  494585 ssh_runner.go:195] Run: crio --version
	I1026 09:25:15.037061  494585 ssh_runner.go:195] Run: crio --version
	I1026 09:25:15.079676  494585 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:25:15.082420  494585 cli_runner.go:164] Run: docker network inspect embed-certs-204381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:25:15.101459  494585 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:25:15.106355  494585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:25:15.118203  494585 kubeadm.go:883] updating cluster {Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:25:15.118324  494585 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:15.118393  494585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:25:15.165127  494585 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:25:15.165151  494585 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:25:15.165246  494585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:25:15.192476  494585 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:25:15.192501  494585 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:25:15.192511  494585 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:25:15.192616  494585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-204381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:25:15.192714  494585 ssh_runner.go:195] Run: crio config
	I1026 09:25:15.264960  494585 cni.go:84] Creating CNI manager for ""
	I1026 09:25:15.265030  494585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:25:15.265056  494585 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:25:15.265087  494585 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-204381 NodeName:embed-certs-204381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:25:15.265222  494585 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-204381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
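Note: the dump above is the complete manifest minikube renders for this profile — an InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration joined by `---` in one file, written below as /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch (not part of the test run), a file in this shape can be checked with kubeadm's own validator before init is attempted; the binary path is the one this log uses:

	# Sketch only: validate the rendered config with stock kubeadm (v1.26+).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml
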
	I1026 09:25:15.265333  494585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:25:15.273736  494585 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:25:15.273804  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:25:15.282000  494585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 09:25:15.295722  494585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:25:15.309756  494585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 09:25:15.323035  494585 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:25:15.326620  494585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
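Note: the one-liner above keeps the hosts entry idempotent — it filters out any stale control-plane.minikube.internal line, appends the current mapping, and swaps the file in via a temp copy. The same pattern, spelled out with illustrative variable names (NAME and IP are placeholders, not from the log):

	# Sketch: generic idempotent /etc/hosts update.
	NAME=control-plane.minikube.internal
	IP=192.168.76.2
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
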
	I1026 09:25:15.337757  494585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:25:15.466310  494585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:25:15.485872  494585 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381 for IP: 192.168.76.2
	I1026 09:25:15.485895  494585 certs.go:195] generating shared ca certs ...
	I1026 09:25:15.485913  494585 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:15.486151  494585 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:25:15.486239  494585 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:25:15.486265  494585 certs.go:257] generating profile certs ...
	I1026 09:25:15.486343  494585 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.key
	I1026 09:25:15.486362  494585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.crt with IP's: []
	I1026 09:25:15.550543  494585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.crt ...
	I1026 09:25:15.550576  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.crt: {Name:mkb3891aa55996d28e4efd8d81da5448a7f48836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:15.550831  494585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.key ...
	I1026 09:25:15.550846  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.key: {Name:mk0df235d777b934de26a0721f66018190c9e01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:15.550953  494585 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a
	I1026 09:25:15.550969  494585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 09:25:16.192875  494585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a ...
	I1026 09:25:16.192913  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a: {Name:mkcdbaee7abefa7de07b79b0a507892fbc8b542b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.193145  494585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a ...
	I1026 09:25:16.193163  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a: {Name:mk3c51555d42ba4158a227514a9a9f0944d361d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.193254  494585 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt
	I1026 09:25:16.193340  494585 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key
	I1026 09:25:16.193401  494585 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key
	I1026 09:25:16.193418  494585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt with IP's: []
	I1026 09:25:16.564955  494585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt ...
	I1026 09:25:16.564990  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt: {Name:mk32684f507f93732e8872d382048a6bbea08380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.565181  494585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key ...
	I1026 09:25:16.565196  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key: {Name:mk4fc4dba1e8eaa66e6132a56af01ac57ed2b7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.565394  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:25:16.565443  494585 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:25:16.565458  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:25:16.565485  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:25:16.565511  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:25:16.565547  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:25:16.565590  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:25:16.566164  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:25:16.587501  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:25:16.609730  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:25:16.628536  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:25:16.647341  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 09:25:16.664796  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:25:16.683553  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:25:16.701723  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:25:16.720167  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:25:16.737146  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:25:16.754638  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:25:16.772175  494585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:25:16.785304  494585 ssh_runner.go:195] Run: openssl version
	I1026 09:25:16.791883  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:25:16.801921  494585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:25:16.805835  494585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:25:16.805949  494585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:25:16.849467  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:25:16.857865  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:25:16.866103  494585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:25:16.869780  494585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:25:16.869845  494585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:25:16.911859  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:25:16.920364  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:25:16.928883  494585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:25:16.932442  494585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:25:16.932553  494585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:25:16.973627  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
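Note: the hash-then-symlink sequence above is the standard OpenSSL CA-directory layout: `openssl x509 -hash` prints the 8-hex-digit subject hash, and OpenSSL resolves CAs in /etc/ssl/certs by the name <hash>.0. For example, the b5213941.0 link created above for minikubeCA.pem comes from:

	# Sketch: deriving the symlink name used in the command above.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints: b5213941
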
	I1026 09:25:16.982801  494585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:25:16.986658  494585 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:25:16.986740  494585 kubeadm.go:400] StartCluster: {Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:25:16.986819  494585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:25:16.986892  494585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:25:17.025530  494585 cri.go:89] found id: ""
	I1026 09:25:17.025656  494585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:25:17.038306  494585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:25:17.046937  494585 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:25:17.047074  494585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:25:17.060245  494585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:25:17.060323  494585 kubeadm.go:157] found existing configuration files:
	
	I1026 09:25:17.060412  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:25:17.075407  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:25:17.075524  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:25:17.084053  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:25:17.092622  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:25:17.092693  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:25:17.100274  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:25:17.109222  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:25:17.109334  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:25:17.117078  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:25:17.131162  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:25:17.131284  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 09:25:17.139053  494585 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:25:17.183710  494585 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:25:17.183963  494585 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:25:17.209860  494585 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:25:17.210006  494585 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:25:17.210072  494585 kubeadm.go:318] OS: Linux
	I1026 09:25:17.210149  494585 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:25:17.210223  494585 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:25:17.210290  494585 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:25:17.210374  494585 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:25:17.210439  494585 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:25:17.210531  494585 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:25:17.210605  494585 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:25:17.210679  494585 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:25:17.210813  494585 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:25:17.283135  494585 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:25:17.283306  494585 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:25:17.283413  494585 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:25:17.291380  494585 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:25:17.297317  494585 out.go:252]   - Generating certificates and keys ...
	I1026 09:25:17.297488  494585 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:25:17.297595  494585 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:25:17.732846  494585 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:25:18.500135  494585 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:25:19.419296  494585 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 09:25:19.580689  494585 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:25:19.951757  494585 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:25:19.952143  494585 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-204381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 09:25:20.517845  494585 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 09:25:20.518226  494585 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-204381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 09:25:21.051493  494585 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:25:22.558651  494585 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:25:22.816918  494585 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:25:22.817222  494585 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:25:23.253592  494585 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:25:24.954543  494585 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 09:25:25.311096  494585 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:25:25.984763  494585 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:25:26.326918  494585 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:25:26.327510  494585 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:25:26.341110  494585 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:25:26.344496  494585 out.go:252]   - Booting up control plane ...
	I1026 09:25:26.344607  494585 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:25:26.344689  494585 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:25:26.344763  494585 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:25:26.361237  494585 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:25:26.361355  494585 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:25:26.375735  494585 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:25:26.375849  494585 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:25:26.375895  494585 kubeadm.go:318] [kubelet-start] Starting the kubelet
	
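Note: the long --ignore-preflight-errors list on the kubeadm init invocation above suppresses checks that are expected to fail inside a Docker-driver node (swap enabled, CPU/memory floors, SystemVerification, pre-existing manifest files and ports). As a sketch, the preflight phase alone can be replayed against the same config with the same kind of suppressions:

	# Sketch only: re-run just kubeadm's preflight phase (paths from this log).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
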
	
	==> CRI-O <==
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.331375609Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75685a67-d2c8-4cd7-b5be-9543e96f7158 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.33234349Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=958ead66-e790-4b8a-8ee6-dfcabac06758 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.332462745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.337750493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.337915058Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f4aa7d1cf346f0fe6910744b1e33893b07955e5f335a5a742d20ccb16a5c745d/merged/etc/passwd: no such file or directory"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.337939509Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f4aa7d1cf346f0fe6910744b1e33893b07955e5f335a5a742d20ccb16a5c745d/merged/etc/group: no such file or directory"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.338182958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.377348625Z" level=info msg="Created container 1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead: kube-system/storage-provisioner/storage-provisioner" id=958ead66-e790-4b8a-8ee6-dfcabac06758 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.378541355Z" level=info msg="Starting container: 1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead" id=d0e4ee16-0787-4b95-a6dc-3f6ae9ce0ec2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.380501142Z" level=info msg="Started container" PID=1644 containerID=1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead description=kube-system/storage-provisioner/storage-provisioner id=d0e4ee16-0787-4b95-a6dc-3f6ae9ce0ec2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15231334c34eaa393a282d3dc80a42bbb4011ba0aba90eedc7f81dddc8f8c8c8
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.411292611Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.420722928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.420892374Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.420968437Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.424943299Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.425117405Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.425200639Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.433837896Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.4340159Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.434101005Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.45080879Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.450982921Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.451073483Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.454905606Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.45507273Z" level=info msg="Updated default CNI network name to kindnet"
	
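Note: the CREATE/WRITE/RENAME sequence above is kindnet writing its CNI config to a .temp file and renaming it into place; CRI-O's inotify watcher on /etc/cni/net.d reloads the default network on every event, which is why "Updated default CNI network name to kindnet" repeats. A sketch of inspecting what CRI-O ended up with on the node:

	# Sketch: look at the CNI config directory CRI-O is watching.
	ls -l /etc/cni/net.d/
	cat /etc/cni/net.d/10-kindnet.conflist
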
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1562913041a9b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   15231334c34ea       storage-provisioner                                    kube-system
	968dbcb672d65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   b96a8983c226c       dashboard-metrics-scraper-6ffb444bf9-wwbp9             kubernetes-dashboard
	6c9d9c9cb3912       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   824cdd6134009       kubernetes-dashboard-855c9754f9-jkxkb                  kubernetes-dashboard
	9a9388f4f5ac4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   36a75c5e78f37       kindnet-7kfgn                                          kube-system
	612ef723d31dd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   15231334c34ea       storage-provisioner                                    kube-system
	7c08d1e2f5902       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   59792692031d3       busybox                                                default
	5e27883c19db0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   9806bdb3c28f4       coredns-66bc5c9577-szwxb                               kube-system
	2fc2cbd7f301a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   c9cc9a1d7bf8b       kube-proxy-kzrr9                                       kube-system
	003b044f1b413       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   76359c38979e7       kube-scheduler-default-k8s-diff-port-289159            kube-system
	4b362316d3756       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   bf40cb03e2ad4       kube-controller-manager-default-k8s-diff-port-289159   kube-system
	958b42e7b2a41       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   b69cdbb0aea82       kube-apiserver-default-k8s-diff-port-289159            kube-system
	97f6719cfd228       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   07c5826857e97       etcd-default-k8s-diff-port-289159                      kube-system
	
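Note: the table above is crictl-style container status, gathered the same way as the "listing CRI containers" step earlier in this log. A sketch of the equivalent live query on the node:

	# Sketch: list all kube-system containers, mirroring the StartCluster step.
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
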
	
	==> coredns [5e27883c19db0827c49f4c2614c23bd2fe0b2b8872d0aa74eadd85b5d5df8d20] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51171 - 16692 "HINFO IN 5812205453849933790.4728531947321053846. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01409939s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
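Note: the "dial tcp 10.96.0.1:443: i/o timeout" errors above are CoreDNS failing to reach the in-cluster apiserver VIP — the first address of the default 10.96.0.0/12 serviceSubnet — while the restarted node's service-proxy rules were still being reprogrammed. A sketch of probing that VIP from inside the cluster (assumes a throwaway busybox pod whose wget was built with TLS; /healthz is anonymously readable under default RBAC; the pod name is illustrative):

	# Sketch: probe the kubernetes Service VIP from a pod.
	kubectl run vip-probe --rm -it --restart=Never --image=busybox -- \
	  wget -q -O- --no-check-certificate https://10.96.0.1/healthz
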
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-289159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-289159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=default-k8s-diff-port-289159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_23_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:23:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-289159
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:25:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:23:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-289159
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1e8e5c9f-87b4-4325-9486-aebc60fc37f2
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-szwxb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-289159                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-7kfgn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-289159             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-289159    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-kzrr9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-289159             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wwbp9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jkxkb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-289159 event: Registered Node default-k8s-diff-port-289159 in Controller
	  Normal   NodeReady                96s                    kubelet          Node default-k8s-diff-port-289159 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)      kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)      kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)      kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node default-k8s-diff-port-289159 event: Registered Node default-k8s-diff-port-289159 in Controller
	
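Note: the Allocated resources summary is the column sums of the pod table above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m of CPU requests, and 850m / 2000m on this 2-CPU node is 42.5%, truncated to the 42% shown.
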
	
	==> dmesg <==
	[ +34.748379] overlayfs: idmapped layers are currently not supported
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	
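Note: the repeated overlayfs warnings above are emitted each time a container mount requests idmapped layers on this kernel; overlayfs gained idmapped-layer support upstream in 5.19, so on 5.15 the kernel logs the warning and the runtime proceeds without them, which is harmless for these tests. A sketch of the check:

	# Sketch: the kernel in this run predates overlayfs idmapped-layer support.
	uname -r    # 5.15.0-1084-aws in this run
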
	
	==> etcd [97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6] <==
	{"level":"warn","ts":"2025-10-26T09:24:35.257692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.279712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.298621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.366446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.373631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.386463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.403828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.430540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.447164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.465886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.519651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.528835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.543437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.570461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.601206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.611255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.635766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.647535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.673035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.692509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.713322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.731974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.758322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.778520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.874932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	
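Note: the burst of "rejected connection ... error EOF" entries above is typically benign during a restart window: each one is a connection to etcd's client endpoint from 127.0.0.1 that closed before completing the TLS handshake, the usual signature of plain TCP probes or of a client giving up early. A sketch that produces the same class of entry:

	# Sketch: a bare TCP connect/close against etcd's client port (assumes nc
	# is present on the node).
	nc -z 127.0.0.1 2379
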
	
	==> kernel <==
	 09:25:27 up  3:07,  0 user,  load average: 4.40, 3.31, 2.82
	Linux default-k8s-diff-port-289159 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9a9388f4f5ac4949a732221eb509d21231678fe4155231bd49a140d1e9fae63d] <==
	I1026 09:24:38.276272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:24:38.276542       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:24:38.276673       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:24:38.276685       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:24:38.276695       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:24:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:24:38.413377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:24:38.413404       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:24:38.413414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:24:38.413541       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:25:08.407500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:25:08.411007       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:25:08.414492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:25:08.414579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 09:25:10.013544       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:25:10.013581       1 metrics.go:72] Registering metrics
	I1026 09:25:10.013638       1 controller.go:711] "Syncing nftables rules"
	I1026 09:25:18.410786       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:25:18.410965       1 main.go:301] handling current node
	
	
	==> kube-apiserver [958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e] <==
	I1026 09:24:36.848136       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:24:36.877585       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:24:36.916403       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:24:36.916477       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:24:36.985599       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:24:37.014669       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 09:24:37.014703       1 policy_source.go:240] refreshing policies
	I1026 09:24:37.062181       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:24:37.062209       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:24:37.062582       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 09:24:37.062720       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:24:37.069321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:24:37.110201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1026 09:24:37.151726       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:24:37.181997       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:24:37.694785       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:24:38.446947       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:24:38.541353       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:24:38.604909       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:24:38.622699       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:24:38.717809       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.170.35"}
	I1026 09:24:38.779060       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.10.172"}
	I1026 09:24:40.508677       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:24:40.756267       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:24:40.804471       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75] <==
	I1026 09:24:40.399686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:24:40.399716       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:24:40.399760       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:24:40.399788       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:24:40.399941       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:24:40.406197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:24:40.406850       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:24:40.406953       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:24:40.407301       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:24:40.407355       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:24:40.407411       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:24:40.407474       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-289159"
	I1026 09:24:40.407510       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 09:24:40.410087       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 09:24:40.411068       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 09:24:40.411732       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:24:40.416912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:24:40.420456       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:24:40.430177       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:24:40.442798       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:24:40.444286       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:24:40.448458       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:24:40.465253       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:24:40.465280       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:24:40.465288       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2fc2cbd7f301a63755c482b1b3dd4679382cfa7037c64f021dba12297c96e575] <==
	I1026 09:24:38.422157       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:24:38.570074       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:24:38.671114       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:24:38.671165       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:24:38.671264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:24:38.834889       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:24:38.834941       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:24:38.859796       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:24:38.868211       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:24:38.868245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:38.879069       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:24:38.879096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:24:38.879472       1 config.go:200] "Starting service config controller"
	I1026 09:24:38.879480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:24:38.879823       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:24:38.879830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:24:38.880463       1 config.go:309] "Starting node config controller"
	I1026 09:24:38.880471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:24:38.880477       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:24:38.979494       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:24:38.979565       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:24:38.979863       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
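
	The kube-proxy warning above is upstream guidance rather than a failure: with nodePortAddresses unset, NodePort services accept connections on every local IP. The remedy it quotes is a kube-proxy flag, shown standalone here as a sketch only (minikube normally manages these flags itself):
	
		kube-proxy --nodeport-addresses primary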
	
	
	==> kube-scheduler [003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826] <==
	I1026 09:24:36.322428       1 serving.go:386] Generated self-signed cert in-memory
	I1026 09:24:38.070261       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:24:38.070381       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:38.133453       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:24:38.135410       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 09:24:38.135437       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 09:24:38.135471       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:24:38.136185       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:24:38.136210       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:24:38.136227       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:38.136235       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:38.236803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:38.237173       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 09:24:38.237307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:24:40 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:40.496690     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070118     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f51b3f5f-7944-4fbc-8663-fc9647be0c2f-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jkxkb\" (UID: \"f51b3f5f-7944-4fbc-8663-fc9647be0c2f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jkxkb"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070177     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7bz\" (UniqueName: \"kubernetes.io/projected/f51b3f5f-7944-4fbc-8663-fc9647be0c2f-kube-api-access-gm7bz\") pod \"kubernetes-dashboard-855c9754f9-jkxkb\" (UID: \"f51b3f5f-7944-4fbc-8663-fc9647be0c2f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jkxkb"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070202     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/252c14ee-03a1-416b-a142-17899e436c18-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wwbp9\" (UID: \"252c14ee-03a1-416b-a142-17899e436c18\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070226     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgz28\" (UniqueName: \"kubernetes.io/projected/252c14ee-03a1-416b-a142-17899e436c18-kube-api-access-mgz28\") pod \"dashboard-metrics-scraper-6ffb444bf9-wwbp9\" (UID: \"252c14ee-03a1-416b-a142-17899e436c18\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: W1026 09:24:41.313794     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-824cdd6134009b1e174c15ace12681abd622b55e95988d3f1288e3dd248fea4d WatchSource:0}: Error finding container 824cdd6134009b1e174c15ace12681abd622b55e95988d3f1288e3dd248fea4d: Status 404 returned error can't find the container with id 824cdd6134009b1e174c15ace12681abd622b55e95988d3f1288e3dd248fea4d
	Oct 26 09:24:46 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:46.206284     781 scope.go:117] "RemoveContainer" containerID="3b74811ed7981d45b388d2f84dae88dfd4d6e78b8071188af2813c81da2beec2"
	Oct 26 09:24:47 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:47.230811     781 scope.go:117] "RemoveContainer" containerID="3b74811ed7981d45b388d2f84dae88dfd4d6e78b8071188af2813c81da2beec2"
	Oct 26 09:24:47 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:47.230991     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:24:47 default-k8s-diff-port-289159 kubelet[781]: E1026 09:24:47.231761     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:24:48 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:48.238290     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:24:48 default-k8s-diff-port-289159 kubelet[781]: E1026 09:24:48.239503     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:24:51 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:51.265156     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:24:51 default-k8s-diff-port-289159 kubelet[781]: E1026 09:24:51.268833     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.060660     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.317373     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.317665     781 scope.go:117] "RemoveContainer" containerID="968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: E1026 09:25:06.317819     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.375553     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jkxkb" podStartSLOduration=17.339721728 podStartE2EDuration="26.375536358s" podCreationTimestamp="2025-10-26 09:24:40 +0000 UTC" firstStartedPulling="2025-10-26 09:24:41.318438648 +0000 UTC m=+12.493453502" lastFinishedPulling="2025-10-26 09:24:50.354253269 +0000 UTC m=+21.529268132" observedRunningTime="2025-10-26 09:24:51.269949208 +0000 UTC m=+22.444964063" watchObservedRunningTime="2025-10-26 09:25:06.375536358 +0000 UTC m=+37.550551221"
	Oct 26 09:25:09 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:09.329483     781 scope.go:117] "RemoveContainer" containerID="612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f"
	Oct 26 09:25:11 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:11.265223     781 scope.go:117] "RemoveContainer" containerID="968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	Oct 26 09:25:11 default-k8s-diff-port-289159 kubelet[781]: E1026 09:25:11.265405     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:25:24 default-k8s-diff-port-289159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:25:24 default-k8s-diff-port-289159 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:25:24 default-k8s-diff-port-289159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
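
	The kubelet lines show dashboard-metrics-scraper cycling through CrashLoopBackOff, with the restart back-off doubling from 10s to 20s between attempts, until kubelet itself is stopped at 09:25:24. Two standard follow-ups, using the pod name from the log above:
	
		kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-wwbp9 --previous
		kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-wwbp9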
	
	
	==> kubernetes-dashboard [6c9d9c9cb391226f6310c9075eb7e3d3395c852ecd5bae121b4a476b9ec84c4a] <==
	2025/10/26 09:24:50 Starting overwatch
	2025/10/26 09:24:50 Using namespace: kubernetes-dashboard
	2025/10/26 09:24:50 Using in-cluster config to connect to apiserver
	2025/10/26 09:24:50 Using secret token for csrf signing
	2025/10/26 09:24:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:24:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:24:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 09:24:50 Generating JWE encryption key
	2025/10/26 09:24:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:24:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:24:51 Initializing JWE encryption key from synchronized object
	2025/10/26 09:24:51 Creating in-cluster Sidecar client
	2025/10/26 09:24:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:24:51 Serving insecurely on HTTP port: 9090
	2025/10/26 09:25:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
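
	Both health-check failures above are the dashboard probing its metrics sidecar through the dashboard-metrics-scraper Service, consistent with the CrashLoopBackOff seen in the kubelet log. The Service and its endpoints can be checked directly (names from the log):
	
		kubectl -n kubernetes-dashboard get svc dashboard-metrics-scraper
		kubectl -n kubernetes-dashboard get endpointslices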
	
	
	==> storage-provisioner [1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead] <==
	I1026 09:25:09.410593       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:25:09.426629       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:25:09.426961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:25:09.431694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:12.887432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:17.148342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:20.746813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:23.800386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:26.822696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:26.828172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:25:26.828322       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:25:26.828505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-289159_f6b19864-b617-400d-aede-a724491d6221!
	I1026 09:25:26.829378       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"491e1afa-6b15-4fa8-8df8-cf9dae75b323", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-289159_f6b19864-b617-400d-aede-a724491d6221 became leader
	W1026 09:25:26.851008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:26.855000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:25:26.929402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-289159_f6b19864-b617-400d-aede-a724491d6221!
	
	
	==> storage-provisioner [612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f] <==
	I1026 09:24:38.364115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:25:08.366151       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
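
	The two storage-provisioner excerpts are consecutive incarnations of the same pod: the first (612ef723…) died fatally at 09:25:08 when the Service VIP timed out, and its replacement (1562913…) re-acquired the kube-system/k8s.io-minikube-hostpath lease at 09:25:26. The lease is stored in a v1 Endpoints object (hence the deprecation warnings) and can be inspected directly:
	
		kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml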
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159: exit status 2 (589.380099ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
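The field-selector query above is the harness's check for pods stuck outside phase Running; the same selector is handy for spot checks by hand, e.g.:

	kubectl --context default-k8s-diff-port-289159 get po -A --field-selector=status.phase!=Running -o wide
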
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-289159
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-289159:

-- stdout --
	[
	    {
	        "Id": "e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67",
	        "Created": "2025-10-26T09:22:35.695576526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:24:21.11130871Z",
	            "FinishedAt": "2025-10-26T09:24:20.116648194Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/hostname",
	        "HostsPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/hosts",
	        "LogPath": "/var/lib/docker/containers/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67-json.log",
	        "Name": "/default-k8s-diff-port-289159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-289159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-289159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67",
	                "LowerDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e2bfcf62b6661d66254b6e23b846830b388429af2a7c2b46e590e668a49c27/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-289159",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-289159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-289159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-289159",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-289159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b2e728367a8d3a532ebbd9d8fd74bdb98b5a0fddabfec5967aa20949a741d0b",
	            "SandboxKey": "/var/run/docker/netns/2b2e728367a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-289159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:af:b0:0b:9e:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "788f8e4ab8525806628d59d0a963ab3ec20463b77ce93fefea997bd8290d71c3",
	                    "EndpointID": "5aa6a9236d364a14545bef7a1ef39022d1875a45aa0a8eaae288287fa95e9cc7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-289159",
	                        "e75dab2714ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
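In the inspect output above, every container port is published on 127.0.0.1 with an ephemeral host port (22/tcp → 33435, 8444/tcp → 33438, and so on). The same values can be pulled with docker's built-in Go-template formatter instead of scraping the JSON, e.g.:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-289159
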
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159: exit status 2 (509.678136ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-289159 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-289159 logs -n 25: (1.970239646s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:18 UTC │ 26 Oct 25 09:19 UTC │
	│ delete  │ -p kubernetes-upgrade-275732                                                                                                                                                                                                                  │ kubernetes-upgrade-275732    │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:21 UTC │
	│ start   │ -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:21 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ cert-options-094384 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                                                                                               │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:25:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:25:01.546147  494585 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:25:01.546275  494585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:01.546286  494585 out.go:374] Setting ErrFile to fd 2...
	I1026 09:25:01.546292  494585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:01.546543  494585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:25:01.547043  494585 out.go:368] Setting JSON to false
	I1026 09:25:01.548165  494585 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11252,"bootTime":1761459450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:25:01.548238  494585 start.go:141] virtualization:  
	I1026 09:25:01.552321  494585 out.go:179] * [embed-certs-204381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:25:01.556729  494585 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:25:01.556908  494585 notify.go:220] Checking for updates...
	I1026 09:25:01.566798  494585 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:25:01.570149  494585 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:25:01.573277  494585 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:25:01.577089  494585 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:25:01.580261  494585 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:25:01.584011  494585 config.go:182] Loaded profile config "default-k8s-diff-port-289159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:01.584155  494585 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:25:01.627085  494585 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:25:01.627214  494585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:01.689238  494585 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:25:01.677953331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:01.689356  494585 docker.go:318] overlay module found
	I1026 09:25:01.692601  494585 out.go:179] * Using the docker driver based on user configuration
	I1026 09:25:01.695615  494585 start.go:305] selected driver: docker
	I1026 09:25:01.695645  494585 start.go:925] validating driver "docker" against <nil>
	I1026 09:25:01.695660  494585 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:25:01.696506  494585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:01.756507  494585 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:25:01.747018296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:01.756658  494585 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 09:25:01.756906  494585 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:25:01.759829  494585 out.go:179] * Using Docker driver with root privileges
	I1026 09:25:01.762906  494585 cni.go:84] Creating CNI manager for ""
	I1026 09:25:01.762988  494585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:25:01.763002  494585 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:25:01.763094  494585 start.go:349] cluster config:
	{Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:25:01.766473  494585 out.go:179] * Starting "embed-certs-204381" primary control-plane node in "embed-certs-204381" cluster
	I1026 09:25:01.769377  494585 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:25:01.772386  494585 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:25:01.775338  494585 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:01.775427  494585 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:25:01.775441  494585 cache.go:58] Caching tarball of preloaded images
	I1026 09:25:01.775447  494585 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:25:01.775590  494585 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:25:01.775603  494585 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:25:01.775735  494585 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json ...
	I1026 09:25:01.775781  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json: {Name:mk7979a4ff906b2642aec86dd01313a076c79266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:01.797224  494585 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:25:01.797248  494585 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:25:01.797270  494585 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:25:01.797296  494585 start.go:360] acquireMachinesLock for embed-certs-204381: {Name:mkd161c65630ff13edac2ff621a7dae8e5ffecd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:01.797413  494585 start.go:364] duration metric: took 100.883µs to acquireMachinesLock for "embed-certs-204381"
	I1026 09:25:01.797440  494585 start.go:93] Provisioning new machine with config: &{Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:25:01.797509  494585 start.go:125] createHost starting for "" (driver="docker")
	W1026 09:25:02.600834  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:25:05.100713  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	I1026 09:25:01.801039  494585 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:25:01.801305  494585 start.go:159] libmachine.API.Create for "embed-certs-204381" (driver="docker")
	I1026 09:25:01.801357  494585 client.go:168] LocalClient.Create starting
	I1026 09:25:01.801449  494585 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:25:01.801491  494585 main.go:141] libmachine: Decoding PEM data...
	I1026 09:25:01.801508  494585 main.go:141] libmachine: Parsing certificate...
	I1026 09:25:01.801562  494585 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:25:01.801586  494585 main.go:141] libmachine: Decoding PEM data...
	I1026 09:25:01.801600  494585 main.go:141] libmachine: Parsing certificate...
	I1026 09:25:01.801992  494585 cli_runner.go:164] Run: docker network inspect embed-certs-204381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:25:01.819246  494585 cli_runner.go:211] docker network inspect embed-certs-204381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:25:01.819339  494585 network_create.go:284] running [docker network inspect embed-certs-204381] to gather additional debugging logs...
	I1026 09:25:01.819362  494585 cli_runner.go:164] Run: docker network inspect embed-certs-204381
	W1026 09:25:01.836756  494585 cli_runner.go:211] docker network inspect embed-certs-204381 returned with exit code 1
	I1026 09:25:01.836792  494585 network_create.go:287] error running [docker network inspect embed-certs-204381]: docker network inspect embed-certs-204381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-204381 not found
	I1026 09:25:01.836812  494585 network_create.go:289] output of [docker network inspect embed-certs-204381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-204381 not found
	
	** /stderr **
	I1026 09:25:01.836932  494585 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:25:01.855318  494585 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:25:01.855692  494585 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:25:01.855953  494585 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:25:01.856392  494585 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cdc20}
	I1026 09:25:01.856415  494585 network_create.go:124] attempt to create docker network embed-certs-204381 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 09:25:01.856470  494585 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-204381 embed-certs-204381
	I1026 09:25:01.922566  494585 network_create.go:108] docker network embed-certs-204381 192.168.76.0/24 created
	I1026 09:25:01.922602  494585 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-204381" container
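[Editor's note] The "skipping subnet ... that is taken" lines above show minikube's free-subnet scan: it probes private /24 blocks in a fixed stride (192.168.49.0, .58.0, .67.0, and finally .76.0 here) and stops at the first block whose gateway is not already bound to a local bridge. A minimal Go sketch of that idea, under the assumption that "taken" simply means the gateway IP already sits on a host interface; this is an illustration, not minikube's actual network.go:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks candidate /24 blocks with the same 9-step
    // spacing seen in the log (.49, .58, .67, .76, ...) and returns the
    // first whose gateway is not already assigned to a local interface.
    func firstFreeSubnet() (string, error) {
        taken := map[string]bool{}
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return "", err
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok {
                taken[ipnet.IP.String()] = true
            }
        }
        for third := 49; third <= 247; third += 9 {
            if gw := fmt.Sprintf("192.168.%d.1", third); !taken[gw] {
                return fmt.Sprintf("192.168.%d.0/24", third), nil
            }
        }
        return "", fmt.Errorf("no free private /24 found")
    }

    func main() {
        subnet, err := firstFreeSubnet()
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", subnet) // e.g. 192.168.76.0/24
    }

The chosen block is then handed to `docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 ...`, as the Run line above shows.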
	I1026 09:25:01.922704  494585 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:25:01.939146  494585 cli_runner.go:164] Run: docker volume create embed-certs-204381 --label name.minikube.sigs.k8s.io=embed-certs-204381 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:25:01.958698  494585 oci.go:103] Successfully created a docker volume embed-certs-204381
	I1026 09:25:01.958822  494585 cli_runner.go:164] Run: docker run --rm --name embed-certs-204381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-204381 --entrypoint /usr/bin/test -v embed-certs-204381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:25:02.552687  494585 oci.go:107] Successfully prepared a docker volume embed-certs-204381
	I1026 09:25:02.552739  494585 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:02.552759  494585 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:25:02.552844  494585 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-204381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 09:25:07.598630  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	W1026 09:25:09.599931  490787 pod_ready.go:104] pod "coredns-66bc5c9577-szwxb" is not "Ready", error: <nil>
	I1026 09:25:10.598166  490787 pod_ready.go:94] pod "coredns-66bc5c9577-szwxb" is "Ready"
	I1026 09:25:10.598195  490787 pod_ready.go:86] duration metric: took 31.505329014s for pod "coredns-66bc5c9577-szwxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.601269  490787 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.606448  490787 pod_ready.go:94] pod "etcd-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:10.606476  490787 pod_ready.go:86] duration metric: took 5.128616ms for pod "etcd-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.609028  490787 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.614599  490787 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:10.614632  490787 pod_ready.go:86] duration metric: took 5.530262ms for pod "kube-apiserver-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.616954  490787 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:06.945139  494585 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-204381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.392251432s)
	I1026 09:25:06.945187  494585 kic.go:203] duration metric: took 4.392424473s to extract preloaded images to volume ...
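[Editor's note] The 4.39s step just completed is the preload trick: the lz4 image tarball is mounted read-only into a throwaway container whose entrypoint is tar, and extracted straight into the named volume that will later back /var in the node container. A hedged os/exec sketch of the same invocation (paths shortened to placeholders):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload replays the docker command from the log: a --rm
    // container running tar, with the tarball mounted at /preloaded.tar
    // and the target volume mounted at /extractDir.
    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder arguments; the log uses the full jenkins cache path
        // and the pinned kicbase digest.
        err := extractPreload("/path/to/preloaded-images.tar.lz4",
            "embed-certs-204381",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773")
        if err != nil {
            panic(err)
        }
        fmt.Println("preload extracted into volume")
    }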
	W1026 09:25:06.945319  494585 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:25:06.945440  494585 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:25:07.020669  494585 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-204381 --name embed-certs-204381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-204381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-204381 --network embed-certs-204381 --ip 192.168.76.2 --volume embed-certs-204381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:25:07.348129  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Running}}
	I1026 09:25:07.367547  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:07.390128  494585 cli_runner.go:164] Run: docker exec embed-certs-204381 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:25:07.444996  494585 oci.go:144] the created container "embed-certs-204381" has a running status.
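[Editor's note] After the long `docker run`, minikube double-checks that the node container actually stayed up by inspecting .State.Running and .State.Status, as the two inspect Runs above show. A small sketch of that check as a polling loop (the loop is this sketch's choice; it assumes only that docker is on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls `docker container inspect --format {{.State.Running}}`
    // until it reports "true" or the deadline passes.
    func waitRunning(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Running}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("container %q never reached a running state", name)
    }

    func main() {
        if err := waitRunning("embed-certs-204381", 30*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("container is running")
    }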
	I1026 09:25:07.445035  494585 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa...
	I1026 09:25:08.227304  494585 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:25:08.248278  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:08.265129  494585 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:25:08.265154  494585 kic_runner.go:114] Args: [docker exec --privileged embed-certs-204381 chown docker:docker /home/docker/.ssh/authorized_keys]
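[Editor's note] The kic runner mints an RSA keypair on the host, pushes the public half to /home/docker/.ssh/authorized_keys (the 381-byte copy above), and chowns it so the docker user can log in over the published SSH port. A sketch of the key-generation half, assuming the golang.org/x/crypto/ssh module:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // 2048-bit RSA, like the id_rsa/id_rsa.pub pair in the log.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            panic(err)
        }
        // This single line is what gets docker-exec'd into the node's
        // authorized_keys.
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
        fmt.Println("wrote id_rsa / id_rsa.pub")
    }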
	I1026 09:25:08.307342  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:08.327696  494585 machine.go:93] provisionDockerMachine start ...
	I1026 09:25:08.327825  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:08.346701  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:08.347203  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:08.347221  494585 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:25:08.347901  494585 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60422->127.0.0.1:33440: read: connection reset by peer
	I1026 09:25:11.498531  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-204381
	
	I1026 09:25:11.498595  494585 ubuntu.go:182] provisioning hostname "embed-certs-204381"
	I1026 09:25:11.498668  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:11.516316  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:11.516633  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:11.516652  494585 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-204381 && echo "embed-certs-204381" | sudo tee /etc/hostname
	I1026 09:25:10.796677  490787 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:10.796710  490787 pod_ready.go:86] duration metric: took 179.724243ms for pod "kube-controller-manager-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:10.996789  490787 pod_ready.go:83] waiting for pod "kube-proxy-kzrr9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.395964  490787 pod_ready.go:94] pod "kube-proxy-kzrr9" is "Ready"
	I1026 09:25:11.396047  490787 pod_ready.go:86] duration metric: took 399.222849ms for pod "kube-proxy-kzrr9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.596928  490787 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.996845  490787 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-289159" is "Ready"
	I1026 09:25:11.996878  490787 pod_ready.go:86] duration metric: took 399.92534ms for pod "kube-scheduler-default-k8s-diff-port-289159" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:25:11.996892  490787 pod_ready.go:40] duration metric: took 32.913347782s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:25:12.088215  490787 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:25:12.091674  490787 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-289159" cluster and "default" namespace by default
	I1026 09:25:11.684937  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-204381
	
	I1026 09:25:11.685019  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:11.703085  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:11.703406  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:11.703432  494585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-204381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-204381/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-204381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:25:11.854998  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:25:11.855022  494585 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:25:11.855046  494585 ubuntu.go:190] setting up certificates
	I1026 09:25:11.855057  494585 provision.go:84] configureAuth start
	I1026 09:25:11.855123  494585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-204381
	I1026 09:25:11.872807  494585 provision.go:143] copyHostCerts
	I1026 09:25:11.872874  494585 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:25:11.872884  494585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:25:11.872967  494585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:25:11.873073  494585 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:25:11.873078  494585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:25:11.873104  494585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:25:11.873152  494585 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:25:11.873158  494585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:25:11.873182  494585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:25:11.873226  494585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.embed-certs-204381 san=[127.0.0.1 192.168.76.2 embed-certs-204381 localhost minikube]
	I1026 09:25:12.784639  494585 provision.go:177] copyRemoteCerts
	I1026 09:25:12.784714  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:25:12.784763  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:12.802482  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:12.906470  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:25:12.924125  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 09:25:12.942876  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 09:25:12.962191  494585 provision.go:87] duration metric: took 1.107120342s to configureAuth
	I1026 09:25:12.962219  494585 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:25:12.962407  494585 config.go:182] Loaded profile config "embed-certs-204381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:12.962524  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:12.982188  494585 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:12.982495  494585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1026 09:25:12.982509  494585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:25:13.331752  494585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:25:13.331818  494585 machine.go:96] duration metric: took 5.004096726s to provisionDockerMachine
	I1026 09:25:13.331842  494585 client.go:171] duration metric: took 11.530474783s to LocalClient.Create
	I1026 09:25:13.331896  494585 start.go:167] duration metric: took 11.530592003s to libmachine.API.Create "embed-certs-204381"
	I1026 09:25:13.331922  494585 start.go:293] postStartSetup for "embed-certs-204381" (driver="docker")
	I1026 09:25:13.331945  494585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:25:13.332060  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:25:13.332147  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.350589  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.458950  494585 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:25:13.462267  494585 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:25:13.462296  494585 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:25:13.462308  494585 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:25:13.462362  494585 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:25:13.462452  494585 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:25:13.462570  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:25:13.470190  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:25:13.489226  494585 start.go:296] duration metric: took 157.277142ms for postStartSetup
	I1026 09:25:13.489602  494585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-204381
	I1026 09:25:13.509349  494585 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json ...
	I1026 09:25:13.509651  494585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:25:13.509693  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.528281  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.632280  494585 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:25:13.637306  494585 start.go:128] duration metric: took 11.839781918s to createHost
	I1026 09:25:13.637329  494585 start.go:83] releasing machines lock for "embed-certs-204381", held for 11.839906729s
	I1026 09:25:13.637406  494585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-204381
	I1026 09:25:13.654853  494585 ssh_runner.go:195] Run: cat /version.json
	I1026 09:25:13.654890  494585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:25:13.654905  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.654944  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:13.683685  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.696396  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:13.894305  494585 ssh_runner.go:195] Run: systemctl --version
	I1026 09:25:13.900936  494585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:25:13.940804  494585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:25:13.945304  494585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:25:13.945422  494585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:25:13.981871  494585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:25:13.981896  494585 start.go:495] detecting cgroup driver to use...
	I1026 09:25:13.981953  494585 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:25:13.982041  494585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:25:14.002598  494585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:25:14.017995  494585 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:25:14.018064  494585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:25:14.037517  494585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:25:14.057890  494585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:25:14.179435  494585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:25:14.308301  494585 docker.go:234] disabling docker service ...
	I1026 09:25:14.308366  494585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:25:14.330208  494585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:25:14.343352  494585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:25:14.458954  494585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:25:14.576769  494585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:25:14.590528  494585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:25:14.612884  494585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:25:14.613012  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.623609  494585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:25:14.623727  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.632843  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.641454  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.650635  494585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:25:14.659150  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.667801  494585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.681422  494585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:14.690783  494585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:25:14.699258  494585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:25:14.707169  494585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:25:14.814161  494585 ssh_runner.go:195] Run: sudo systemctl restart crio
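[Editor's note] The burst of sed -i runs above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and open unprivileged port 0 via default_sysctls, before the daemon-reload and crio restart. The same whole-line rewrite expressed in Go, as a sketch (path and keys taken from the log):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfLine replaces any existing `key = ...` line with `key = "value"`,
    // mirroring the sed -i 's|^.*key = .*$|key = "value"|' invocations above.
    func setConfLine(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setConfLine(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            panic(err)
        }
        if err := setConfLine(conf, "cgroup_manager", "cgroupfs"); err != nil {
            panic(err)
        }
        // A systemctl daemon-reload plus restart crio (as in the log)
        // then makes the runtime pick the new settings up.
    }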
	I1026 09:25:14.947634  494585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:25:14.947735  494585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:25:14.952392  494585 start.go:563] Will wait 60s for crictl version
	I1026 09:25:14.952514  494585 ssh_runner.go:195] Run: which crictl
	I1026 09:25:14.956891  494585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:25:14.985960  494585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:25:14.986105  494585 ssh_runner.go:195] Run: crio --version
	I1026 09:25:15.037061  494585 ssh_runner.go:195] Run: crio --version
	I1026 09:25:15.079676  494585 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:25:15.082420  494585 cli_runner.go:164] Run: docker network inspect embed-certs-204381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:25:15.101459  494585 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:25:15.106355  494585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
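[Editor's note] Both /etc/hosts edits in this start path (host.minikube.internal here, control-plane.minikube.internal further down) use the same upsert pattern: drop any existing line for the name, append a fresh "IP<tab>name" entry, stage the result in /tmp, then sudo-copy it over /etc/hosts. The copy rather than a rename matters because /etc/hosts is typically bind-mounted inside a container and can only be rewritten in place. A compact sketch of the same upsert:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost rewrites hostsPath so exactly one line maps name to ip,
    // echoing the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` trick.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Truncate-and-write in place, which is what cp does to the target.
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }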
	I1026 09:25:15.118203  494585 kubeadm.go:883] updating cluster {Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:25:15.118324  494585 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:15.118393  494585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:25:15.165127  494585 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:25:15.165151  494585 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:25:15.165246  494585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:25:15.192476  494585 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:25:15.192501  494585 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:25:15.192511  494585 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:25:15.192616  494585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-204381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:25:15.192714  494585 ssh_runner.go:195] Run: crio config
	I1026 09:25:15.264960  494585 cni.go:84] Creating CNI manager for ""
	I1026 09:25:15.265030  494585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:25:15.265056  494585 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:25:15.265087  494585 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-204381 NodeName:embed-certs-204381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:25:15.265222  494585 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-204381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:25:15.265333  494585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:25:15.273736  494585 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:25:15.273804  494585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:25:15.282000  494585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 09:25:15.295722  494585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:25:15.309756  494585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
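[Editor's note] The kubeadm.yaml.new shipped above (2215 bytes) is the rendered form of the kubeadm options struct printed at the top of this stanza. A toy text/template rendering of just the networking section, with struct fields invented for the sketch (they are not minikube's real types):

    package main

    import (
        "os"
        "text/template"
    )

    // Values mirrors a few of the kubeadm options from the log.
    type Values struct {
        KubernetesVersion string
        PodSubnet         string
        ServiceCIDR       string
        DNSDomain         string
    }

    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: ClusterConfiguration\n" +
        "kubernetesVersion: {{.KubernetesVersion}}\n" +
        "networking:\n" +
        "  dnsDomain: {{.DNSDomain}}\n" +
        "  podSubnet: \"{{.PodSubnet}}\"\n" +
        "  serviceSubnet: {{.ServiceCIDR}}\n"

    func main() {
        v := Values{
            KubernetesVersion: "v1.34.1",
            PodSubnet:         "10.244.0.0/16",
            ServiceCIDR:       "10.96.0.0/12",
            DNSDomain:         "cluster.local",
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, v); err != nil {
            panic(err)
        }
    }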
	I1026 09:25:15.323035  494585 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:25:15.326620  494585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:25:15.337757  494585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:25:15.466310  494585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:25:15.485872  494585 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381 for IP: 192.168.76.2
	I1026 09:25:15.485895  494585 certs.go:195] generating shared ca certs ...
	I1026 09:25:15.485913  494585 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:15.486151  494585 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:25:15.486239  494585 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:25:15.486265  494585 certs.go:257] generating profile certs ...
	I1026 09:25:15.486343  494585 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.key
	I1026 09:25:15.486362  494585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.crt with IP's: []
	I1026 09:25:15.550543  494585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.crt ...
	I1026 09:25:15.550576  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.crt: {Name:mkb3891aa55996d28e4efd8d81da5448a7f48836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:15.550831  494585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.key ...
	I1026 09:25:15.550846  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/client.key: {Name:mk0df235d777b934de26a0721f66018190c9e01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:15.550953  494585 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a
	I1026 09:25:15.550969  494585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 09:25:16.192875  494585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a ...
	I1026 09:25:16.192913  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a: {Name:mkcdbaee7abefa7de07b79b0a507892fbc8b542b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.193145  494585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a ...
	I1026 09:25:16.193163  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a: {Name:mk3c51555d42ba4158a227514a9a9f0944d361d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.193254  494585 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt.e145061a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt
	I1026 09:25:16.193340  494585 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key.e145061a -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key
	I1026 09:25:16.193401  494585 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key
	I1026 09:25:16.193418  494585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt with IP's: []
	I1026 09:25:16.564955  494585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt ...
	I1026 09:25:16.564990  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt: {Name:mk32684f507f93732e8872d382048a6bbea08380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:16.565181  494585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key ...
	I1026 09:25:16.565196  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key: {Name:mk4fc4dba1e8eaa66e6132a56af01ac57ed2b7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
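[Editor's note] Each profile cert above is minted against the shared minikubeCA; note the apiserver cert's SAN list carries the service VIP 10.96.0.1 next to 127.0.0.1, 10.0.0.1, and the node IP 192.168.76.2. A self-contained crypto/x509 sketch of issuing a cert with IP SANs from a CA (serials and key size are placeholders; the 26280h validity matches the CertExpiration in the config dump):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Leaf cert with the same IP SANs the apiserver cert gets above.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }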
	I1026 09:25:16.565394  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:25:16.565443  494585 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:25:16.565458  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:25:16.565485  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:25:16.565511  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:25:16.565547  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:25:16.565590  494585 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:25:16.566164  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:25:16.587501  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:25:16.609730  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:25:16.628536  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:25:16.647341  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 09:25:16.664796  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 09:25:16.683553  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:25:16.701723  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:25:16.720167  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:25:16.737146  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:25:16.754638  494585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:25:16.772175  494585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:25:16.785304  494585 ssh_runner.go:195] Run: openssl version
	I1026 09:25:16.791883  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:25:16.801921  494585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:25:16.805835  494585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:25:16.805949  494585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:25:16.849467  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:25:16.857865  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:25:16.866103  494585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:25:16.869780  494585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:25:16.869845  494585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:25:16.911859  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:25:16.920364  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:25:16.928883  494585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:25:16.932442  494585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:25:16.932553  494585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:25:16.973627  494585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
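[Editor's note] The openssl/ln pairs above build an OpenSSL-style trust directory: each CA in /etc/ssl/certs needs a <subject-hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0 here) so lookups by subject hash succeed. A sketch that shells out to openssl for the hash, as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashLink creates /etc/ssl/certs/<subject-hash>.0 -> certPath, the
    // same effect as the `openssl x509 -hash` + `ln -fs` pair in the log.
    func hashLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }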
	I1026 09:25:16.982801  494585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:25:16.986658  494585 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:25:16.986740  494585 kubeadm.go:400] StartCluster: {Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:25:16.986819  494585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:25:16.986892  494585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:25:17.025530  494585 cri.go:89] found id: ""
	I1026 09:25:17.025656  494585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:25:17.038306  494585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:25:17.046937  494585 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:25:17.047074  494585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:25:17.060245  494585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:25:17.060323  494585 kubeadm.go:157] found existing configuration files:
	
	I1026 09:25:17.060412  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:25:17.075407  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:25:17.075524  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:25:17.084053  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:25:17.092622  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:25:17.092693  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:25:17.100274  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:25:17.109222  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:25:17.109334  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:25:17.117078  494585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:25:17.131162  494585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:25:17.131284  494585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
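
The grep/rm pairs above are stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already mentions https://control-plane.minikube.internal:8443, and removed otherwise so the upcoming kubeadm init regenerates it. A hedged Go sketch of that decision (direct file access stands in for the ssh_runner the log uses):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
			_ = os.Remove(c)
			fmt.Println("removed stale config:", c)
		}
	}
}
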
	I1026 09:25:17.139053  494585 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:25:17.183710  494585 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:25:17.183963  494585 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:25:17.209860  494585 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:25:17.210006  494585 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:25:17.210072  494585 kubeadm.go:318] OS: Linux
	I1026 09:25:17.210149  494585 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:25:17.210223  494585 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:25:17.210290  494585 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:25:17.210374  494585 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:25:17.210439  494585 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:25:17.210531  494585 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:25:17.210605  494585 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:25:17.210679  494585 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:25:17.210813  494585 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:25:17.283135  494585 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:25:17.283306  494585 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:25:17.283413  494585 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:25:17.291380  494585 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:25:17.297317  494585 out.go:252]   - Generating certificates and keys ...
	I1026 09:25:17.297488  494585 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:25:17.297595  494585 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:25:17.732846  494585 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:25:18.500135  494585 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:25:19.419296  494585 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 09:25:19.580689  494585 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:25:19.951757  494585 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:25:19.952143  494585 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-204381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 09:25:20.517845  494585 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 09:25:20.518226  494585 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-204381 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 09:25:21.051493  494585 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:25:22.558651  494585 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:25:22.816918  494585 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:25:22.817222  494585 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:25:23.253592  494585 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:25:24.954543  494585 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 09:25:25.311096  494585 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:25:25.984763  494585 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:25:26.326918  494585 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:25:26.327510  494585 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:25:26.341110  494585 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:25:26.344496  494585 out.go:252]   - Booting up control plane ...
	I1026 09:25:26.344607  494585 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:25:26.344689  494585 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:25:26.344763  494585 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:25:26.361237  494585 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:25:26.361355  494585 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:25:26.375735  494585 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:25:26.375849  494585 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:25:26.375895  494585 kubeadm.go:318] [kubelet-start] Starting the kubelet
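
kubeadm reports the SANs it signs into each serving cert, e.g. DNS names [embed-certs-204381 localhost] plus the node and loopback IPs. To verify a generated cert after the fact, a short Go sketch; the etcd/server.crt path under the certificateDir /var/lib/minikube/certs is inferred from the log and should be treated as an assumption:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed path: certificateDir "/var/lib/minikube/certs" from the log.
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("DNS SANs:", cert.DNSNames) // expect [embed-certs-204381 localhost]
	fmt.Println("IP SANs:", cert.IPAddresses)
}
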
	
	
	==> CRI-O <==
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.331375609Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75685a67-d2c8-4cd7-b5be-9543e96f7158 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.33234349Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=958ead66-e790-4b8a-8ee6-dfcabac06758 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.332462745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.337750493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.337915058Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f4aa7d1cf346f0fe6910744b1e33893b07955e5f335a5a742d20ccb16a5c745d/merged/etc/passwd: no such file or directory"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.337939509Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f4aa7d1cf346f0fe6910744b1e33893b07955e5f335a5a742d20ccb16a5c745d/merged/etc/group: no such file or directory"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.338182958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.377348625Z" level=info msg="Created container 1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead: kube-system/storage-provisioner/storage-provisioner" id=958ead66-e790-4b8a-8ee6-dfcabac06758 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.378541355Z" level=info msg="Starting container: 1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead" id=d0e4ee16-0787-4b95-a6dc-3f6ae9ce0ec2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:25:09 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:09.380501142Z" level=info msg="Started container" PID=1644 containerID=1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead description=kube-system/storage-provisioner/storage-provisioner id=d0e4ee16-0787-4b95-a6dc-3f6ae9ce0ec2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15231334c34eaa393a282d3dc80a42bbb4011ba0aba90eedc7f81dddc8f8c8c8
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.411292611Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.420722928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.420892374Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.420968437Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.424943299Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.425117405Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.425200639Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.433837896Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.4340159Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.434101005Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.45080879Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.450982921Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.451073483Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.454905606Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:25:18 default-k8s-diff-port-289159 crio[653]: time="2025-10-26T09:25:18.45507273Z" level=info msg="Updated default CNI network name to kindnet"
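
The CREATE/WRITE/RENAME sequence above is CRI-O's CNI-directory monitor reacting to kindnet atomically rewriting 10-kindnet.conflist (write a .temp file, then rename it into place). A sketch of the same watch pattern, assuming the third-party github.com/fsnotify/fsnotify package rather than CRI-O's internal watcher:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A runtime would re-parse the conflist here and pick a new
			// default network, as CRI-O logs "Updated default CNI network".
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
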
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1562913041a9b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   15231334c34ea       storage-provisioner                                    kube-system
	968dbcb672d65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   b96a8983c226c       dashboard-metrics-scraper-6ffb444bf9-wwbp9             kubernetes-dashboard
	6c9d9c9cb3912       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   824cdd6134009       kubernetes-dashboard-855c9754f9-jkxkb                  kubernetes-dashboard
	9a9388f4f5ac4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   36a75c5e78f37       kindnet-7kfgn                                          kube-system
	612ef723d31dd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   15231334c34ea       storage-provisioner                                    kube-system
	7c08d1e2f5902       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   59792692031d3       busybox                                                default
	5e27883c19db0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   9806bdb3c28f4       coredns-66bc5c9577-szwxb                               kube-system
	2fc2cbd7f301a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   c9cc9a1d7bf8b       kube-proxy-kzrr9                                       kube-system
	003b044f1b413       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   76359c38979e7       kube-scheduler-default-k8s-diff-port-289159            kube-system
	4b362316d3756       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   bf40cb03e2ad4       kube-controller-manager-default-k8s-diff-port-289159   kube-system
	958b42e7b2a41       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b69cdbb0aea82       kube-apiserver-default-k8s-diff-port-289159            kube-system
	97f6719cfd228       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   07c5826857e97       etcd-default-k8s-diff-port-289159                      kube-system
	
	
	==> coredns [5e27883c19db0827c49f4c2614c23bd2fe0b2b8872d0aa74eadd85b5d5df8d20] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51171 - 16692 "HINFO IN 5812205453849933790.4728531947321053846. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01409939s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
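
The i/o timeouts above mean CoreDNS could not reach the apiserver through the kubernetes Service IP until kube-proxy/CNI programming caught up. A tiny Go probe that reproduces the same reachability check (the 10.96.0.1:443 address comes from the log; the 3-second timeout is an arbitrary choice):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint CoreDNS was timing out against in the log above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("dial failed, matching the i/o timeout above:", err)
		return
	}
	conn.Close()
	fmt.Println("service IP reachable; kube-proxy/CNI rules are in place")
}
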
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-289159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-289159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=default-k8s-diff-port-289159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_23_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:23:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-289159
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:25:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:25:08 +0000   Sun, 26 Oct 2025 09:23:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-289159
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1e8e5c9f-87b4-4325-9486-aebc60fc37f2
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-szwxb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-289159                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-7kfgn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-289159             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-289159    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-kzrr9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-289159             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wwbp9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jkxkb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m22s                  node-controller  Node default-k8s-diff-port-289159 event: Registered Node default-k8s-diff-port-289159 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-289159 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-289159 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-289159 event: Registered Node default-k8s-diff-port-289159 in Controller
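
The Allocated resources block is plain arithmetic over the pod table: summed requests divided by the node's Allocatable, e.g. 850m CPU of 2 cores is 42%. A worked Go example using the numbers above:

package main

import "fmt"

func main() {
	// Figures copied from the node's Allocatable and pod tables above.
	allocatableMilliCPU := int64(2000)  // cpu: 2
	requestedMilliCPU := int64(850)     // 100m+100m+100m+250m+200m+100m
	allocatableMemKi := int64(8022300)  // memory: 8022300Ki
	requestedMemKi := int64(220 * 1024) // 70Mi+100Mi+50Mi = 220Mi

	fmt.Printf("cpu    %dm (%d%%)\n", requestedMilliCPU, requestedMilliCPU*100/allocatableMilliCPU)
	fmt.Printf("memory %dMi (%d%%)\n", requestedMemKi/1024, requestedMemKi*100/allocatableMemKi)
}
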
	
	
	==> dmesg <==
	[Oct26 09:00] overlayfs: idmapped layers are currently not supported
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97f6719cfd228f8b60cdd96ea59eca8384e01fbb78c019af24986d7fe76937b6] <==
	{"level":"warn","ts":"2025-10-26T09:24:35.257692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.279712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.298621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.366446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.373631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.386463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.403828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.430540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.447164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.465886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.519651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.528835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.543437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.570461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.601206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.611255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.635766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.647535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.673035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.692509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.713322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.731974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.758322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.778520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:24:35.874932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:25:31 up  3:08,  0 user,  load average: 4.93, 3.44, 2.86
	Linux default-k8s-diff-port-289159 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9a9388f4f5ac4949a732221eb509d21231678fe4155231bd49a140d1e9fae63d] <==
	I1026 09:24:38.276272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:24:38.276542       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:24:38.276673       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:24:38.276685       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:24:38.276695       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:24:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:24:38.413377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:24:38.413404       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:24:38.413414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:24:38.413541       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:25:08.407500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:25:08.411007       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:25:08.414492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:25:08.414579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 09:25:10.013544       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:25:10.013581       1 metrics.go:72] Registering metrics
	I1026 09:25:10.013638       1 controller.go:711] "Syncing nftables rules"
	I1026 09:25:18.410786       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:25:18.410965       1 main.go:301] handling current node
	I1026 09:25:28.415127       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:25:28.415159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [958b42e7b2a418f79327f04920bedbe4a907dad6d46afb08d2e49b5828ca0f1e] <==
	I1026 09:24:36.848136       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:24:36.877585       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:24:36.916403       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:24:36.916477       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:24:36.985599       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:24:37.014669       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 09:24:37.014703       1 policy_source.go:240] refreshing policies
	I1026 09:24:37.062181       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:24:37.062209       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:24:37.062582       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 09:24:37.062720       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:24:37.069321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:24:37.110201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1026 09:24:37.151726       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:24:37.181997       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:24:37.694785       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:24:38.446947       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:24:38.541353       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:24:38.604909       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:24:38.622699       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:24:38.717809       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.170.35"}
	I1026 09:24:38.779060       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.10.172"}
	I1026 09:24:40.508677       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:24:40.756267       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:24:40.804471       1 controller.go:667] quota admission added evaluator for: endpoints
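
The "allocated clusterIPs" lines show the apiserver handing out Service IPs from the 10.96.0.0/12 ServiceCIDR. The sketch below illustrates only the idea, first-free-address scanning; the real allocator is a bitmap persisted in etcd, and the dashboard IP is copied from the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cidr := netip.MustParsePrefix("10.96.0.0/12") // ServiceCIDR from the log
	used := map[netip.Addr]bool{
		netip.MustParseAddr("10.96.0.1"):     true, // kubernetes.default
		netip.MustParseAddr("10.106.170.35"): true, // kubernetes-dashboard (log)
	}
	// First-free scan; a production allocator tracks this in a bitmap instead.
	for a := cidr.Addr().Next(); cidr.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("next free ClusterIP:", a)
			return
		}
	}
}
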
	
	
	==> kube-controller-manager [4b362316d375694bc2e107043288e01a543767397bcd510769d3c29576432e75] <==
	I1026 09:24:40.399686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:24:40.399716       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:24:40.399760       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:24:40.399788       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:24:40.399941       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:24:40.406197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:24:40.406850       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:24:40.406953       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:24:40.407301       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:24:40.407355       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:24:40.407411       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:24:40.407474       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-289159"
	I1026 09:24:40.407510       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 09:24:40.410087       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 09:24:40.411068       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 09:24:40.411732       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:24:40.416912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:24:40.420456       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:24:40.430177       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:24:40.442798       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:24:40.444286       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:24:40.448458       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:24:40.465253       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:24:40.465280       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:24:40.465288       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2fc2cbd7f301a63755c482b1b3dd4679382cfa7037c64f021dba12297c96e575] <==
	I1026 09:24:38.422157       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:24:38.570074       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:24:38.671114       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:24:38.671165       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:24:38.671264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:24:38.834889       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:24:38.834941       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:24:38.859796       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:24:38.868211       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:24:38.868245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:38.879069       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:24:38.879096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:24:38.879472       1 config.go:200] "Starting service config controller"
	I1026 09:24:38.879480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:24:38.879823       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:24:38.879830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:24:38.880463       1 config.go:309] "Starting node config controller"
	I1026 09:24:38.880471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:24:38.880477       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:24:38.979494       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:24:38.979565       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:24:38.979863       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
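
kube-proxy's route_localnet message above refers to a per-interface sysctl that makes 127.0.0.1-destined NodePort traffic routable. A minimal Go sketch of setting it via /proc (requires root; kube-proxy itself goes through its own sysctl helpers):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The sysctl behind kube-proxy's "Setting route_localnet=1" message.
	const key = "/proc/sys/net/ipv4/conf/all/route_localnet"
	if err := os.WriteFile(key, []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "need root to set route_localnet:", err)
		os.Exit(1)
	}
	fmt.Println("route_localnet=1")
}
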
	
	
	==> kube-scheduler [003b044f1b413cbf8963dc0b448b602dbc034401f8fc4088aff26ee92a946826] <==
	I1026 09:24:36.322428       1 serving.go:386] Generated self-signed cert in-memory
	I1026 09:24:38.070261       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:24:38.070381       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:24:38.133453       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:24:38.135410       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 09:24:38.135437       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 09:24:38.135471       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:24:38.136185       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:24:38.136210       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:24:38.136227       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:38.136235       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:38.236803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:24:38.237173       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 09:24:38.237307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:24:40 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:40.496690     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070118     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f51b3f5f-7944-4fbc-8663-fc9647be0c2f-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jkxkb\" (UID: \"f51b3f5f-7944-4fbc-8663-fc9647be0c2f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jkxkb"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070177     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7bz\" (UniqueName: \"kubernetes.io/projected/f51b3f5f-7944-4fbc-8663-fc9647be0c2f-kube-api-access-gm7bz\") pod \"kubernetes-dashboard-855c9754f9-jkxkb\" (UID: \"f51b3f5f-7944-4fbc-8663-fc9647be0c2f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jkxkb"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070202     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/252c14ee-03a1-416b-a142-17899e436c18-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wwbp9\" (UID: \"252c14ee-03a1-416b-a142-17899e436c18\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:41.070226     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgz28\" (UniqueName: \"kubernetes.io/projected/252c14ee-03a1-416b-a142-17899e436c18-kube-api-access-mgz28\") pod \"dashboard-metrics-scraper-6ffb444bf9-wwbp9\" (UID: \"252c14ee-03a1-416b-a142-17899e436c18\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9"
	Oct 26 09:24:41 default-k8s-diff-port-289159 kubelet[781]: W1026 09:24:41.313794     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75dab2714ba57116889f43929f6241e51fd71df3af069de81eca8d2058f8d67/crio-824cdd6134009b1e174c15ace12681abd622b55e95988d3f1288e3dd248fea4d WatchSource:0}: Error finding container 824cdd6134009b1e174c15ace12681abd622b55e95988d3f1288e3dd248fea4d: Status 404 returned error can't find the container with id 824cdd6134009b1e174c15ace12681abd622b55e95988d3f1288e3dd248fea4d
	Oct 26 09:24:46 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:46.206284     781 scope.go:117] "RemoveContainer" containerID="3b74811ed7981d45b388d2f84dae88dfd4d6e78b8071188af2813c81da2beec2"
	Oct 26 09:24:47 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:47.230811     781 scope.go:117] "RemoveContainer" containerID="3b74811ed7981d45b388d2f84dae88dfd4d6e78b8071188af2813c81da2beec2"
	Oct 26 09:24:47 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:47.230991     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:24:47 default-k8s-diff-port-289159 kubelet[781]: E1026 09:24:47.231761     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:24:48 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:48.238290     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:24:48 default-k8s-diff-port-289159 kubelet[781]: E1026 09:24:48.239503     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:24:51 default-k8s-diff-port-289159 kubelet[781]: I1026 09:24:51.265156     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:24:51 default-k8s-diff-port-289159 kubelet[781]: E1026 09:24:51.268833     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.060660     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.317373     781 scope.go:117] "RemoveContainer" containerID="1ba2a3d4ae0db01a3ecb8c9f9bb0a3d7378cbaa3e153083275776e899a181302"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.317665     781 scope.go:117] "RemoveContainer" containerID="968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: E1026 09:25:06.317819     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:25:06 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:06.375553     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jkxkb" podStartSLOduration=17.339721728 podStartE2EDuration="26.375536358s" podCreationTimestamp="2025-10-26 09:24:40 +0000 UTC" firstStartedPulling="2025-10-26 09:24:41.318438648 +0000 UTC m=+12.493453502" lastFinishedPulling="2025-10-26 09:24:50.354253269 +0000 UTC m=+21.529268132" observedRunningTime="2025-10-26 09:24:51.269949208 +0000 UTC m=+22.444964063" watchObservedRunningTime="2025-10-26 09:25:06.375536358 +0000 UTC m=+37.550551221"
	Oct 26 09:25:09 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:09.329483     781 scope.go:117] "RemoveContainer" containerID="612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f"
	Oct 26 09:25:11 default-k8s-diff-port-289159 kubelet[781]: I1026 09:25:11.265223     781 scope.go:117] "RemoveContainer" containerID="968dbcb672d654c43b25e65b0d9c8b8ab829eeb8096ceddb7a3b52333dba66a7"
	Oct 26 09:25:11 default-k8s-diff-port-289159 kubelet[781]: E1026 09:25:11.265405     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wwbp9_kubernetes-dashboard(252c14ee-03a1-416b-a142-17899e436c18)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wwbp9" podUID="252c14ee-03a1-416b-a142-17899e436c18"
	Oct 26 09:25:24 default-k8s-diff-port-289159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:25:24 default-k8s-diff-port-289159 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:25:24 default-k8s-diff-port-289159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
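	Note on the kubelet entries above: the CrashLoopBackOff delay for dashboard-metrics-scraper doubles between restarts (back-off 10s at 09:24:47, back-off 20s at 09:25:06). A minimal Go sketch of that schedule, assuming kubelet's documented defaults of a 10s initial delay doubling up to a 5m cap; these defaults are an assumption, not values read from this cluster's configuration:

	package main

	import (
		"fmt"
		"time"
	)

	// Crash-loop back-off as observed above: each restart doubles the delay.
	// The 10s start and 5m cap are kubelet's documented defaults (assumption;
	// not read from this cluster's configuration).
	func main() {
		delay := 10 * time.Second
		const maxDelay = 5 * time.Minute
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}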
	
	
	==> kubernetes-dashboard [6c9d9c9cb391226f6310c9075eb7e3d3395c852ecd5bae121b4a476b9ec84c4a] <==
	2025/10/26 09:24:50 Using namespace: kubernetes-dashboard
	2025/10/26 09:24:50 Using in-cluster config to connect to apiserver
	2025/10/26 09:24:50 Using secret token for csrf signing
	2025/10/26 09:24:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:24:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:24:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 09:24:50 Generating JWE encryption key
	2025/10/26 09:24:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:24:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:24:51 Initializing JWE encryption key from synchronized object
	2025/10/26 09:24:51 Creating in-cluster Sidecar client
	2025/10/26 09:24:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:24:51 Serving insecurely on HTTP port: 9090
	2025/10/26 09:25:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:24:50 Starting overwatch
	
	
	==> storage-provisioner [1562913041a9b955584eae418df51e6b938f8a46f23ec558e13410c28b317ead] <==
	I1026 09:25:09.410593       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:25:09.426629       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:25:09.426961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:25:09.431694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:12.887432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:17.148342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:20.746813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:23.800386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:26.822696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:26.828172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:25:26.828322       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:25:26.828505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-289159_f6b19864-b617-400d-aede-a724491d6221!
	I1026 09:25:26.829378       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"491e1afa-6b15-4fa8-8df8-cf9dae75b323", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-289159_f6b19864-b617-400d-aede-a724491d6221 became leader
	W1026 09:25:26.851008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:26.855000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:25:26.929402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-289159_f6b19864-b617-400d-aede-a724491d6221!
	W1026 09:25:28.858979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:28.865689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:30.886053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:25:30.899960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
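	The repeated warnings.go:70 lines above appear because the provisioner takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), deprecated since v1.33 in favour of coordination.k8s.io Leases. A minimal client-go sketch of the Lease-based equivalent; the lock name and namespace are taken from the log, while the timings and overall structure are illustrative assumptions, not the provisioner's actual code:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lease-based lock replacing the deprecated v1 Endpoints object the
		// provisioner still uses (hence the warnings.go:70 lines above).
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting controller") },
				OnStoppedLeading: func() { log.Println("lost lease, stopping") },
			},
		})
	}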
	
	
	==> storage-provisioner [612ef723d31dd943116be12cfd63550460a3a048a6a3f11973ec335e136a391f] <==
	I1026 09:24:38.364115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:25:08.366151       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
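The fatal line in the storage-provisioner [612ef…] block above shows the first instance dying because its startup probe of the apiserver service VIP timed out: Get "https://10.96.0.1:443/version" with a 32s timeout, consistent with the apiserver being unreachable during the pause/restart window this test exercises. A minimal Go sketch of the same reachability check; the address and timeout are taken from the log line, and this only exercises the TCP dial, not TLS or the /version request:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Probe the apiserver service VIP the way the failed request above did.
	// Address and timeout come from the log line; everything else is
	// illustrative.
	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
		if err != nil {
			fmt.Println("apiserver service VIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver service VIP reachable")
	}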
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159: exit status 2 (525.391851ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.226236ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:26:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
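The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells out to `sudo runc list -f json` on the node; with no /run/runc state directory the command exits 1 and the whole addon enable aborts. A minimal sketch of that check which treats the missing state directory as "nothing paused"; this is an illustrative fix, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Run the same command minikube uses to look for paused containers.
	// Treating "open /run/runc: no such file or directory" as an empty
	// result is one plausible way to avoid the hard failure seen above.
	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "/run/runc: no such file or directory") {
				fmt.Println("runc state dir missing: assume no paused containers")
				return
			}
			fmt.Println("runc list failed:", err)
			return
		}
		fmt.Println(string(out))
	}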
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-204381 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-204381 describe deploy/metrics-server -n kube-system: exit status 1 (107.838269ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-204381 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
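The expected string on the line above is composed from the two flags passed to `addons enable`: --registries=MetricsServer=fake.domain replaces the registry prefix and --images=MetricsServer=registry.k8s.io/echoserver:1.4 replaces the image path; the test then greps the deployment description for the joined result. A sketch of the join (illustrative, not minikube's code):

	package main

	import "fmt"

	// Compose the image reference the test expects to find in the deployment:
	// registry override + "/" + image override (values from the flags above).
	func main() {
		registry := "fake.domain"
		image := "registry.k8s.io/echoserver:1.4"
		fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
	}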
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-204381
helpers_test.go:243: (dbg) docker inspect embed-certs-204381:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab",
	        "Created": "2025-10-26T09:25:07.035838779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 494983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:25:07.116749247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/hosts",
	        "LogPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab-json.log",
	        "Name": "/embed-certs-204381",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-204381:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-204381",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab",
	                "LowerDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-204381",
	                "Source": "/var/lib/docker/volumes/embed-certs-204381/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-204381",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-204381",
	                "name.minikube.sigs.k8s.io": "embed-certs-204381",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5442c9a3d958ab6bd99ec9ae3d9d716315c8024c22b7490ce8e7501d66ac5677",
	            "SandboxKey": "/var/run/docker/netns/5442c9a3d958",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-204381": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:d8:9f:27:30:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33c235a08e203b1c326fabab7473b4ca038ba835f19a85fcec21303edd44d5d4",
	                    "EndpointID": "21b43c05a781b8a002233c1ddfa70cfb3bc0ee4eeba3724e9f369ae1b879e765",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-204381",
	                        "fbf6b6fb12ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
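In the inspect output above, HostConfig.PortBindings lists each port with an empty HostPort, which asks Docker to assign ephemeral host ports at container start; the resolved mappings appear under NetworkSettings.Ports (33440-33444 here). A minimal Go sketch recovering the SSH mapping via `docker inspect` and a Go template; the container name is taken from the output above, the rest is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Extract the host port Docker assigned to the container's 22/tcp,
	// matching the "Ports" section of the inspect output above.
	func main() {
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"embed-certs-204381").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}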
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-204381 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-204381 logs -n 25: (1.294014586s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-094384 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ delete  │ -p cert-options-094384                                                                                                                                                                                                                        │ cert-options-094384          │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ delete  │ -p cert-expiration-375355                                                                                                                                                                                                                     │ cert-expiration-375355       │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                                                                                               │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                                                                                               │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:25:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
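	The header above documents the klog line format used throughout this section: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal Go parser for such lines; the field names are mine, the layout is klog's:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Parse one klog-formatted line from the "Last Start" log above into its
	// documented fields: severity, mmdd date, time, thread id, file:line, msg.
	func main() {
		re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
		line := "I1026 09:25:35.860544  498114 out.go:360] Setting OutFile to fd 1 ..."
		m := re.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}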
	I1026 09:25:35.860544  498114 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:25:35.860667  498114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:35.860673  498114 out.go:374] Setting ErrFile to fd 2...
	I1026 09:25:35.860678  498114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:25:35.860936  498114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:25:35.861330  498114 out.go:368] Setting JSON to false
	I1026 09:25:35.862199  498114 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11286,"bootTime":1761459450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:25:35.862275  498114 start.go:141] virtualization:  
	I1026 09:25:35.865761  498114 out.go:179] * [no-preload-491604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:25:35.869638  498114 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:25:35.869698  498114 notify.go:220] Checking for updates...
	I1026 09:25:35.880129  498114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:25:35.883057  498114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:25:35.886009  498114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:25:35.889032  498114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:25:35.892088  498114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:25:35.895505  498114 config.go:182] Loaded profile config "embed-certs-204381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:35.895681  498114 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:25:35.957734  498114 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:25:35.957873  498114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:36.045842  498114 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:25:36.033414849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:36.045957  498114 docker.go:318] overlay module found
	I1026 09:25:36.049018  498114 out.go:179] * Using the docker driver based on user configuration
	I1026 09:25:36.051992  498114 start.go:305] selected driver: docker
	I1026 09:25:36.052014  498114 start.go:925] validating driver "docker" against <nil>
	I1026 09:25:36.052027  498114 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:25:36.052737  498114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:25:36.139186  498114 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:25:36.12845521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:25:36.139345  498114 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 09:25:36.139580  498114 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:25:36.142486  498114 out.go:179] * Using Docker driver with root privileges
	I1026 09:25:36.145395  498114 cni.go:84] Creating CNI manager for ""
	I1026 09:25:36.145470  498114 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:25:36.145484  498114 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:25:36.145629  498114 start.go:349] cluster config:
	{Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:25:36.149311  498114 out.go:179] * Starting "no-preload-491604" primary control-plane node in "no-preload-491604" cluster
	I1026 09:25:36.152270  498114 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:25:36.155323  498114 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:25:36.158237  498114 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:36.158327  498114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:25:36.158397  498114 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json ...
	I1026 09:25:36.158433  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json: {Name:mk481d0f3a14dbeb53b9d7f07cbf272b4d272765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:36.159604  498114 cache.go:107] acquiring lock: {Name:mkdad500968e7139280738b23aa2f2a019253f5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.160418  498114 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 09:25:36.161189  498114 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.775006ms
	I1026 09:25:36.161986  498114 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 09:25:36.162037  498114 cache.go:107] acquiring lock: {Name:mk14bfa53cd66a6ca87d606642a3cbb2da8dfbc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.162869  498114 cache.go:107] acquiring lock: {Name:mk599bfcacc3fab2a4670e80f471bbbcaed32bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.163611  498114 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:36.164486  498114 cache.go:107] acquiring lock: {Name:mkec65762826ae78f9cb76c49217646d15db3a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.164589  498114 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 09:25:36.164713  498114 cache.go:107] acquiring lock: {Name:mk1911c569c908e58b6e7e7f80fbc6513309fcca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.164778  498114 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:36.164864  498114 cache.go:107] acquiring lock: {Name:mk439f753472c6d4dacbd31dbea66f1a2f133a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.163619  498114 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:36.166048  498114 cache.go:107] acquiring lock: {Name:mk7d0c8b8f0317e07f3637091202b09c4c80488b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.166182  498114 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:36.166446  498114 cache.go:107] acquiring lock: {Name:mk38cdae88a1b6a128486f22f7bf9cbf423409f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.166556  498114 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:36.167420  498114 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:36.170504  498114 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:36.172873  498114 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:36.173244  498114 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:36.173452  498114 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 09:25:36.173687  498114 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:36.174004  498114 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:36.178523  498114 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:36.192891  498114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:25:36.192917  498114 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:25:36.192931  498114 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:25:36.192954  498114 start.go:360] acquireMachinesLock for no-preload-491604: {Name:mkc6d58300c0451128c3270d72a7123ff4bec2e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:25:36.193056  498114 start.go:364] duration metric: took 86.278µs to acquireMachinesLock for "no-preload-491604"
	I1026 09:25:36.193082  498114 start.go:93] Provisioning new machine with config: &{Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:25:36.193154  498114 start.go:125] createHost starting for "" (driver="docker")
	I1026 09:25:34.330425  494585 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.256174332s
	I1026 09:25:36.579030  494585 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.502768039s
	I1026 09:25:36.603177  494585 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:25:36.639017  494585 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:25:36.675852  494585 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:25:36.676482  494585 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-204381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:25:36.714387  494585 kubeadm.go:318] [bootstrap-token] Using token: x1pfgm.13dge2pkojzg84td
	I1026 09:25:36.719190  494585 out.go:252]   - Configuring RBAC rules ...
	I1026 09:25:36.719334  494585 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:25:36.740840  494585 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:25:36.762432  494585 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:25:36.767766  494585 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:25:36.783458  494585 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:25:36.793809  494585 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:25:36.990681  494585 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:25:37.713806  494585 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:25:38.015608  494585 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:25:38.020377  494585 kubeadm.go:318] 
	I1026 09:25:38.020475  494585 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:25:38.020482  494585 kubeadm.go:318] 
	I1026 09:25:38.020564  494585 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:25:38.020569  494585 kubeadm.go:318] 
	I1026 09:25:38.020596  494585 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:25:38.023673  494585 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:25:38.023741  494585 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:25:38.023747  494585 kubeadm.go:318] 
	I1026 09:25:38.023810  494585 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:25:38.023816  494585 kubeadm.go:318] 
	I1026 09:25:38.023866  494585 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:25:38.023871  494585 kubeadm.go:318] 
	I1026 09:25:38.023925  494585 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:25:38.024593  494585 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:25:38.024675  494585 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:25:38.024680  494585 kubeadm.go:318] 
	I1026 09:25:38.024769  494585 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:25:38.024850  494585 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:25:38.024854  494585 kubeadm.go:318] 
	I1026 09:25:38.024943  494585 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token x1pfgm.13dge2pkojzg84td \
	I1026 09:25:38.025051  494585 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:25:38.025074  494585 kubeadm.go:318] 	--control-plane 
	I1026 09:25:38.025079  494585 kubeadm.go:318] 
	I1026 09:25:38.025169  494585 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:25:38.025173  494585 kubeadm.go:318] 
	I1026 09:25:38.025259  494585 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token x1pfgm.13dge2pkojzg84td \
	I1026 09:25:38.028366  494585 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:25:38.062832  494585 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:25:38.063068  494585 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:25:38.063178  494585 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
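The worker join line above carries a bootstrap token (x1pfgm.…), which kubeadm expires after 24 hours by default. If it lapses before a node joins, a fresh line can be minted on the control plane; a minimal sketch, assuming shell access to the control-plane node:

    # prints a complete "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:..." command
    sudo kubeadm token create --print-join-command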
	I1026 09:25:38.063194  494585 cni.go:84] Creating CNI manager for ""
	I1026 09:25:38.063201  494585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:25:38.068713  494585 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:25:36.196342  498114 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:25:36.196632  498114 start.go:159] libmachine.API.Create for "no-preload-491604" (driver="docker")
	I1026 09:25:36.196664  498114 client.go:168] LocalClient.Create starting
	I1026 09:25:36.196727  498114 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:25:36.196760  498114 main.go:141] libmachine: Decoding PEM data...
	I1026 09:25:36.196777  498114 main.go:141] libmachine: Parsing certificate...
	I1026 09:25:36.196832  498114 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:25:36.196849  498114 main.go:141] libmachine: Decoding PEM data...
	I1026 09:25:36.196858  498114 main.go:141] libmachine: Parsing certificate...
	I1026 09:25:36.197242  498114 cli_runner.go:164] Run: docker network inspect no-preload-491604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:25:36.234697  498114 cli_runner.go:211] docker network inspect no-preload-491604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:25:36.234839  498114 network_create.go:284] running [docker network inspect no-preload-491604] to gather additional debugging logs...
	I1026 09:25:36.234861  498114 cli_runner.go:164] Run: docker network inspect no-preload-491604
	W1026 09:25:36.253338  498114 cli_runner.go:211] docker network inspect no-preload-491604 returned with exit code 1
	I1026 09:25:36.253366  498114 network_create.go:287] error running [docker network inspect no-preload-491604]: docker network inspect no-preload-491604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-491604 not found
	I1026 09:25:36.253380  498114 network_create.go:289] output of [docker network inspect no-preload-491604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-491604 not found
	
	** /stderr **
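The --format string minikube passes to docker network inspect above is a Go template that flattens name, driver, subnet, gateway, MTU and container IPs into a single JSON-ish line; the exit-1 here simply means the network does not exist yet. A hand-runnable cut-down of the same query, shown against the always-present default bridge network (illustrative):

    docker network inspect bridge --format '{{.Name}} {{.Driver}} {{(index .IPAM.Config 0).Subnet}}'
    # e.g.: bridge bridge 172.17.0.0/16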
	I1026 09:25:36.253484  498114 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:25:36.275860  498114 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:25:36.276260  498114 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:25:36.276607  498114 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:25:36.276916  498114 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-33c235a08e20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:5f:03:8c:80:bb} reservation:<nil>}
	I1026 09:25:36.277422  498114 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c9ef10}
	I1026 09:25:36.277446  498114 network_create.go:124] attempt to create docker network no-preload-491604 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 09:25:36.277507  498114 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-491604 no-preload-491604
	I1026 09:25:36.360727  498114 network_create.go:108] docker network no-preload-491604 192.168.85.0/24 created
	I1026 09:25:36.360758  498114 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-491604" container
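With four subnets already claimed by other profiles, minikube lands on 192.168.85.0/24, puts the gateway on .1 and pins the node container to .2. A quick check that the bridge came up as calculated (sketch, using this run's network name):

    docker network inspect no-preload-491604 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected from the log above: 192.168.85.0/24 192.168.85.1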
	I1026 09:25:36.360832  498114 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:25:36.379609  498114 cli_runner.go:164] Run: docker volume create no-preload-491604 --label name.minikube.sigs.k8s.io=no-preload-491604 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:25:36.400031  498114 oci.go:103] Successfully created a docker volume no-preload-491604
	I1026 09:25:36.400146  498114 cli_runner.go:164] Run: docker run --rm --name no-preload-491604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491604 --entrypoint /usr/bin/test -v no-preload-491604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:25:36.541594  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1026 09:25:36.543541  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 09:25:36.546387  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 09:25:36.548578  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 09:25:36.597156  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 09:25:36.614139  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 09:25:36.620801  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 09:25:36.620966  498114 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 456.483843ms
	I1026 09:25:36.621031  498114 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 09:25:36.621092  498114 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1026 09:25:37.108808  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 09:25:37.109778  498114 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 946.903773ms
	I1026 09:25:37.109834  498114 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 09:25:37.201273  498114 oci.go:107] Successfully prepared a docker volume no-preload-491604
	I1026 09:25:37.201351  498114 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1026 09:25:37.201533  498114 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:25:37.201689  498114 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:25:37.348805  498114 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-491604 --name no-preload-491604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-491604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-491604 --network no-preload-491604 --ip 192.168.85.2 --volume no-preload-491604:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:25:37.645429  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 09:25:37.645632  498114 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.479188943s
	I1026 09:25:37.645965  498114 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 09:25:37.759597  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 09:25:37.759716  498114 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.594835676s
	I1026 09:25:37.759761  498114 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 09:25:37.774266  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 09:25:37.774295  498114 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.608253425s
	I1026 09:25:37.774308  498114 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 09:25:37.881672  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 09:25:37.881703  498114 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.719673067s
	I1026 09:25:37.881714  498114 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 09:25:37.949377  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Running}}
	I1026 09:25:37.984957  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:25:38.039224  498114 cli_runner.go:164] Run: docker exec no-preload-491604 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:25:38.128577  498114 oci.go:144] the created container "no-preload-491604" has a running status.
	I1026 09:25:38.128657  498114 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa...
	I1026 09:25:39.025495  498114 cache.go:157] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 09:25:39.025601  498114 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.860855763s
	I1026 09:25:39.025643  498114 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 09:25:39.025732  498114 cache.go:87] Successfully saved all images to host disk.
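All of the cache lines above resolve to one on-disk layout: <minikube home>/cache/images/<arch>/<registry path>/<image>_<tag>, one tarball per image. Listing this run's directory should show the tarballs just saved (illustrative; coredns sits one level deeper, mirroring its registry path):

    ls /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/
    # coredns/  etcd_3.6.4-0  kube-apiserver_v1.34.1  kube-controller-manager_v1.34.1
    # kube-proxy_v1.34.1  kube-scheduler_v1.34.1  pause_3.10.1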
	I1026 09:25:39.238363  498114 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:25:39.280104  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:25:39.304563  498114 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:25:39.304583  498114 kic_runner.go:114] Args: [docker exec --privileged no-preload-491604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 09:25:39.382977  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:25:39.412955  498114 machine.go:93] provisionDockerMachine start ...
	I1026 09:25:39.413066  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:39.444444  498114 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:39.444781  498114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1026 09:25:39.444790  498114 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:25:39.445721  498114 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37940->127.0.0.1:33445: read: connection reset by peer
	I1026 09:25:38.072197  494585 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:25:38.104828  494585 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:25:38.104850  494585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:25:38.161528  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 09:25:39.555195  494585 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.393629159s)
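cni.go:143 earlier picked kindnet because the docker driver is paired with the crio runtime, and the apply above deploys its manifest. A spot-check once the apiserver answers, reusing the log's own kubectl invocation style (the DaemonSet name is assumed from the kindnet manifest):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet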
	I1026 09:25:39.555239  494585 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:25:39.555353  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:39.555438  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-204381 minikube.k8s.io/updated_at=2025_10_26T09_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=embed-certs-204381 minikube.k8s.io/primary=true
	I1026 09:25:39.819411  494585 ops.go:34] apiserver oom_adj: -16
	I1026 09:25:39.819511  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:40.320511  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:40.819932  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:41.320128  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:41.819970  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:42.320473  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:42.820573  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:43.320557  494585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:25:43.625957  494585 kubeadm.go:1113] duration metric: took 4.070646486s to wait for elevateKubeSystemPrivileges
	I1026 09:25:43.626008  494585 kubeadm.go:402] duration metric: took 26.639271274s to StartCluster
	I1026 09:25:43.626033  494585 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:43.626117  494585 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:25:43.628283  494585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:25:43.628709  494585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:25:43.628954  494585 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:25:43.629299  494585 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:25:43.629380  494585 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-204381"
	I1026 09:25:43.629402  494585 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-204381"
	I1026 09:25:43.629436  494585 host.go:66] Checking if "embed-certs-204381" exists ...
	I1026 09:25:43.630099  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:43.629226  494585 config.go:182] Loaded profile config "embed-certs-204381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:43.634019  494585 addons.go:69] Setting default-storageclass=true in profile "embed-certs-204381"
	I1026 09:25:43.634039  494585 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-204381"
	I1026 09:25:43.634328  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:43.642445  494585 out.go:179] * Verifying Kubernetes components...
	I1026 09:25:43.645829  494585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:25:43.702097  494585 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:43.705057  494585 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:25:43.705080  494585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:25:43.705149  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:43.713447  494585 addons.go:238] Setting addon default-storageclass=true in "embed-certs-204381"
	I1026 09:25:43.713492  494585 host.go:66] Checking if "embed-certs-204381" exists ...
	I1026 09:25:43.713917  494585 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:25:43.811034  494585 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:25:43.811055  494585 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:25:43.811116  494585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:25:43.811404  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:43.873595  494585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:25:43.988376  494585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:25:44.055635  494585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:25:44.335905  494585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:25:44.349045  494585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:25:44.987584  494585 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
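The replace pipeline at 09:25:43.988 rewrites the coredns ConfigMap so in-cluster lookups of host.minikube.internal resolve to the gateway; the fragment it splices into the Corefile, read straight out of the sed expression, is:

    hosts {
        192.168.76.1 host.minikube.internal
        fallthrough
    }

It also inserts "log" ahead of "errors", turning on CoreDNS query logging for this profile.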
	I1026 09:25:44.989677  494585 node_ready.go:35] waiting up to 6m0s for node "embed-certs-204381" to be "Ready" ...
	I1026 09:25:45.444109  494585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.108167138s)
	I1026 09:25:45.444186  494585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.095116983s)
	I1026 09:25:45.454149  494585 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 09:25:42.594114  498114 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-491604
	
	I1026 09:25:42.594140  498114 ubuntu.go:182] provisioning hostname "no-preload-491604"
	I1026 09:25:42.594204  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:42.620593  498114 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:42.620939  498114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1026 09:25:42.620958  498114 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-491604 && echo "no-preload-491604" | sudo tee /etc/hostname
	I1026 09:25:42.786388  498114 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-491604
	
	I1026 09:25:42.786544  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:42.805128  498114 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:42.805459  498114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1026 09:25:42.805481  498114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491604/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:25:42.971221  498114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:25:42.971312  498114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:25:42.971356  498114 ubuntu.go:190] setting up certificates
	I1026 09:25:42.971384  498114 provision.go:84] configureAuth start
	I1026 09:25:42.971474  498114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:25:42.998002  498114 provision.go:143] copyHostCerts
	I1026 09:25:42.998071  498114 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:25:42.998092  498114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:25:42.998169  498114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:25:42.998255  498114 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:25:42.998260  498114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:25:42.998284  498114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:25:42.998332  498114 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:25:42.998336  498114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:25:42.998358  498114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:25:42.998401  498114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.no-preload-491604 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-491604]
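The server certificate generated here carries exactly the SANs listed in san=[...] above. One way to confirm them after provisioning (sketch; needs OpenSSL 1.1.1+, output shape abbreviated):

    openssl x509 -in /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem -noout -ext subjectAltName
    # X509v3 Subject Alternative Name:
    #     DNS:localhost, DNS:minikube, DNS:no-preload-491604, IP Address:127.0.0.1, IP Address:192.168.85.2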
	I1026 09:25:44.246076  498114 provision.go:177] copyRemoteCerts
	I1026 09:25:44.246202  498114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:25:44.246264  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:44.264906  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:25:44.377399  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:25:44.408598  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:25:44.440567  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:25:44.461926  498114 provision.go:87] duration metric: took 1.490505317s to configureAuth
	I1026 09:25:44.461949  498114 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:25:44.462132  498114 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:25:44.462231  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:44.488384  498114 main.go:141] libmachine: Using SSH client type: native
	I1026 09:25:44.488695  498114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1026 09:25:44.488712  498114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:25:44.880785  498114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:25:44.880807  498114 machine.go:96] duration metric: took 5.467824391s to provisionDockerMachine
	I1026 09:25:44.880817  498114 client.go:171] duration metric: took 8.684147551s to LocalClient.Create
	I1026 09:25:44.880830  498114 start.go:167] duration metric: took 8.684201419s to libmachine.API.Create "no-preload-491604"
	I1026 09:25:44.880838  498114 start.go:293] postStartSetup for "no-preload-491604" (driver="docker")
	I1026 09:25:44.880848  498114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:25:44.880924  498114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:25:44.880966  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:44.905112  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:25:45.030005  498114 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:25:45.035473  498114 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:25:45.035510  498114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:25:45.035523  498114 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:25:45.035608  498114 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:25:45.035705  498114 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:25:45.035832  498114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:25:45.052645  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:25:45.087457  498114 start.go:296] duration metric: took 206.603531ms for postStartSetup
	I1026 09:25:45.087971  498114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:25:45.118362  498114 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json ...
	I1026 09:25:45.118789  498114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:25:45.118849  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:45.160974  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:25:45.274135  498114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:25:45.281280  498114 start.go:128] duration metric: took 9.088107428s to createHost
	I1026 09:25:45.281312  498114 start.go:83] releasing machines lock for "no-preload-491604", held for 9.088245899s
	I1026 09:25:45.281405  498114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:25:45.318113  498114 ssh_runner.go:195] Run: cat /version.json
	I1026 09:25:45.318134  498114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:25:45.318172  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:45.318208  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:25:45.384359  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:25:45.385916  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:25:45.534631  498114 ssh_runner.go:195] Run: systemctl --version
	I1026 09:25:45.629792  498114 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:25:45.671302  498114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:25:45.675758  498114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:25:45.675835  498114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:25:45.707344  498114 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:25:45.707368  498114 start.go:495] detecting cgroup driver to use...
	I1026 09:25:45.707399  498114 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:25:45.707452  498114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:25:45.724871  498114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:25:45.739054  498114 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:25:45.739121  498114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:25:45.757438  498114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:25:45.779556  498114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:25:45.457152  494585 addons.go:514] duration metric: took 1.827834005s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 09:25:45.493810  494585 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-204381" context rescaled to 1 replicas
	I1026 09:25:45.901606  498114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:25:46.048812  498114 docker.go:234] disabling docker service ...
	I1026 09:25:46.048929  498114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:25:46.077608  498114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:25:46.090867  498114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:25:46.221165  498114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:25:46.351060  498114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:25:46.363843  498114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:25:46.377879  498114 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:25:46.377992  498114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.387517  498114 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:25:46.387633  498114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.400801  498114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.409151  498114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.417819  498114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:25:46.426300  498114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.437247  498114 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.451238  498114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:25:46.460210  498114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:25:46.469326  498114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:25:46.476520  498114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:25:46.592795  498114 ssh_runner.go:195] Run: sudo systemctl restart crio
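Taken together, the sed passes before this restart leave a small drop-in at /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from the commands above (the section headers are standard cri-o placement and an assumption; the log only shows the key edits):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]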
	I1026 09:25:46.718621  498114 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:25:46.718705  498114 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:25:46.722696  498114 start.go:563] Will wait 60s for crictl version
	I1026 09:25:46.722861  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:46.726359  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:25:46.763885  498114 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:25:46.764045  498114 ssh_runner.go:195] Run: crio --version
	I1026 09:25:46.797762  498114 ssh_runner.go:195] Run: crio --version
	I1026 09:25:46.829663  498114 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:25:46.832451  498114 cli_runner.go:164] Run: docker network inspect no-preload-491604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:25:46.848921  498114 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:25:46.852722  498114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:25:46.862459  498114 kubeadm.go:883] updating cluster {Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:25:46.862579  498114 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:25:46.862625  498114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:25:46.887161  498114 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 09:25:46.887187  498114 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 09:25:46.887242  498114 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:46.887260  498114 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:46.887438  498114 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:46.887441  498114 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 09:25:46.887464  498114 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:46.887525  498114 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:46.887555  498114 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:46.887611  498114 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:46.888843  498114 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 09:25:46.889089  498114 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:46.889251  498114 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:46.889408  498114 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:46.889571  498114 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:46.889885  498114 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:46.890064  498114 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:46.890322  498114 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:47.111758  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1026 09:25:47.121037  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:47.124513  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:47.134929  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:47.142623  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:47.144035  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:47.162925  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:47.197206  498114 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1026 09:25:47.197309  498114 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1026 09:25:47.197388  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.244371  498114 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1026 09:25:47.244562  498114 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:47.244457  498114 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1026 09:25:47.244637  498114 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:47.244676  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.244694  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.244528  498114 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1026 09:25:47.244841  498114 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:47.244881  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.297044  498114 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1026 09:25:47.297085  498114 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:47.297138  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.297195  498114 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1026 09:25:47.297222  498114 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1026 09:25:47.297242  498114 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:47.297270  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.297293  498114 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:47.297329  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 09:25:47.297375  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:47.297400  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:47.297451  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:47.297489  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:47.381879  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:47.381968  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:47.382020  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:47.382075  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 09:25:47.382135  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:47.382182  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:47.382237  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:47.485813  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 09:25:47.485892  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:47.485945  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 09:25:47.485995  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:47.486059  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 09:25:47.486124  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 09:25:47.486180  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:47.594668  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 09:25:47.594812  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 09:25:47.594910  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 09:25:47.595010  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 09:25:47.595088  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 09:25:47.595151  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1026 09:25:47.595215  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 09:25:47.595260  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1026 09:25:47.595305  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1026 09:25:47.595347  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 09:25:47.595391  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 09:25:47.664843  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1026 09:25:47.664923  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1026 09:25:47.665024  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1026 09:25:47.665115  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1026 09:25:47.665216  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1026 09:25:47.665239  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1026 09:25:47.665396  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 09:25:47.665454  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 09:25:47.665538  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1026 09:25:47.665628  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1026 09:25:47.665664  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 09:25:47.665718  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 09:25:47.681094  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1026 09:25:47.681130  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1026 09:25:47.681052  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1026 09:25:47.681522  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1026 09:25:47.740567  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1026 09:25:47.740808  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1026 09:25:47.740646  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1026 09:25:47.740931  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1026 09:25:47.860853  498114 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1026 09:25:47.860926  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1026 09:25:48.190311  498114 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1026 09:25:48.190700  498114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:48.319432  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1026 09:25:48.319613  498114 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 09:25:48.319743  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 09:25:48.378769  498114 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1026 09:25:48.378949  498114 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:48.379125  498114 ssh_runner.go:195] Run: which crictl
	I1026 09:25:50.061183  498114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.741296515s)
	I1026 09:25:50.061210  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1026 09:25:50.061229  498114 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 09:25:50.061281  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 09:25:50.061356  498114 ssh_runner.go:235] Completed: which crictl: (1.682178242s)
	I1026 09:25:50.061391  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1026 09:25:46.992406  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:25:48.992917  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:25:50.993510  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:25:51.346381  498114 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.284967106s)
	I1026 09:25:51.346493  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:51.346639  498114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.285343447s)
	I1026 09:25:51.346659  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1026 09:25:51.346677  498114 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 09:25:51.346733  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 09:25:51.381286  498114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:25:52.723892  498114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.377127096s)
	I1026 09:25:52.723920  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1026 09:25:52.723939  498114 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1026 09:25:52.723987  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1026 09:25:52.724076  498114 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.342725477s)
	I1026 09:25:52.724155  498114 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1026 09:25:52.724235  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1026 09:25:54.427318  498114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.703304103s)
	I1026 09:25:54.427345  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1026 09:25:54.427365  498114 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 09:25:54.427415  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 09:25:54.427487  498114 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.703230486s)
	I1026 09:25:54.427506  498114 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1026 09:25:54.427523  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1026 09:25:55.838867  498114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.411424801s)
	I1026 09:25:55.838898  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1026 09:25:55.838940  498114 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1026 09:25:55.838994  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1026 09:25:53.493436  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:25:55.992844  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:25:59.686509  498114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.847478308s)
	I1026 09:25:59.686537  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1026 09:25:59.686557  498114 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1026 09:25:59.686626  498114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1026 09:26:00.475330  498114 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1026 09:26:00.475417  498114 cache_images.go:124] Successfully loaded all cached images
	I1026 09:26:00.475433  498114 cache_images.go:93] duration metric: took 13.588230018s to LoadCachedImages
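With the crio runtime, each transferred tarball is sideloaded via podman load, as the lines above show; podman writes into the same containers/storage backend that CRI-O reads from, so no registry pull is needed. Reproducing one load by hand, with paths taken from the log:

    # sideload a cached image tarball into the shared containers/storage
    sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
    # confirm the runtime can now see it
    sudo crictl images | grep etcd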
	I1026 09:26:00.475469  498114 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:26:00.475610  498114 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
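The rendered kubelet unit above is installed as a systemd drop-in (10-kubeadm.conf, per the scp later in this log), so the empty ExecStart= followed by the full ExecStart= replaces the packaged command without editing the base unit. Applying such a drop-in manually would look like:

    # re-read unit files after writing the drop-in, then (re)start kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    # show the effective unit including all drop-ins
    systemctl cat kubelet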
	I1026 09:26:00.475743  498114 ssh_runner.go:195] Run: crio config
	I1026 09:26:00.552778  498114 cni.go:84] Creating CNI manager for ""
	I1026 09:26:00.552844  498114 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:26:00.552864  498114 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:26:00.552890  498114 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491604 NodeName:no-preload-491604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:26:00.553019  498114 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
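The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml before init runs. It can be sanity-checked offline first; recent kubeadm releases ship a validate subcommand for exactly this (availability depends on the kubeadm version in use):

    # offline sanity check of the rendered config, no cluster changes
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml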
	I1026 09:26:00.553090  498114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:26:00.561799  498114 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1026 09:26:00.561870  498114 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1026 09:26:00.570170  498114 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1026 09:26:00.570409  498114 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1026 09:26:00.570571  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1026 09:26:00.570655  498114 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1026 09:26:00.575758  498114 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1026 09:26:00.575803  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	W1026 09:25:57.992949  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:25:59.993129  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:26:01.443170  498114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:26:01.461746  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1026 09:26:01.466907  498114 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1026 09:26:01.466953  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1026 09:26:01.557147  498114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1026 09:26:01.568848  498114 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1026 09:26:01.568938  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
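The ?checksum=file:… URLs above indicate each binary is verified against its published .sha256 before being pushed to the node. The equivalent manual download, using the same dl.k8s.io layout from the log:

    V=v1.34.1; ARCH=arm64
    for BIN in kubeadm kubelet kubectl; do
        # fetch the binary, then check it against the published digest
        curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/${BIN}"
        echo "$(curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/${BIN}.sha256")  ${BIN}" | sha256sum -c -
    done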
	I1026 09:26:02.152011  498114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:26:02.160902  498114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:26:02.183244  498114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:26:02.199340  498114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 09:26:02.214487  498114 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:26:02.218857  498114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
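The one-liner above is an idempotent /etc/hosts update: it filters any existing control-plane.minikube.internal entry out, appends the current mapping, and installs the result with sudo. Spelled out as separate steps:

    # drop any stale entry for the name, append the fresh mapping, install the result
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.85.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts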
	I1026 09:26:02.229373  498114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:26:02.362443  498114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:26:02.393745  498114 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604 for IP: 192.168.85.2
	I1026 09:26:02.393767  498114 certs.go:195] generating shared ca certs ...
	I1026 09:26:02.393784  498114 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:02.393944  498114 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:26:02.393995  498114 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:26:02.394010  498114 certs.go:257] generating profile certs ...
	I1026 09:26:02.394075  498114 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.key
	I1026 09:26:02.394094  498114 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt with IP's: []
	I1026 09:26:03.483878  498114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt ...
	I1026 09:26:03.483910  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: {Name:mk9f6c34169c167c765bf7af66d7a90b050a7914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:03.484153  498114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.key ...
	I1026 09:26:03.484176  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.key: {Name:mk9a69506f00c66f8dc736f9dd4c04cd07fa8b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:03.484290  498114 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key.1aa4df19
	I1026 09:26:03.484311  498114 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt.1aa4df19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 09:26:03.865164  498114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt.1aa4df19 ...
	I1026 09:26:03.865194  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt.1aa4df19: {Name:mkf6f1d775480ef05a54181be2530e3c39084570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:03.865381  498114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key.1aa4df19 ...
	I1026 09:26:03.865398  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key.1aa4df19: {Name:mk4d8a83a4d3a9b2337625c8ce065e0c22a867d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:03.865475  498114 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt.1aa4df19 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt
	I1026 09:26:03.865557  498114 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key.1aa4df19 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key
	I1026 09:26:03.865616  498114 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key
	I1026 09:26:03.865633  498114 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.crt with IP's: []
	I1026 09:26:04.237615  498114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.crt ...
	I1026 09:26:04.237650  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.crt: {Name:mk75d92db2f27c115999dca9c7e88b7fc653b7ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:04.237841  498114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key ...
	I1026 09:26:04.237860  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key: {Name:mkcad8e1bbc6f241947f700ad6fa6c25911439a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
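The apiserver certificate generated above is signed for the service VIP, loopback, an HA placeholder, and the node IP ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] per the log). One way to confirm what landed in a generated cert; the path assumes a default $HOME/.minikube layout, whereas this CI run uses a jenkins-specific prefix:

    # print the SANs baked into the profile's apiserver cert
    openssl x509 -noout -text -in "$HOME/.minikube/profiles/no-preload-491604/apiserver.crt" \
        | grep -A1 'Subject Alternative Name'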
	I1026 09:26:04.238051  498114 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:26:04.238095  498114 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:26:04.238109  498114 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:26:04.238135  498114 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:26:04.238161  498114 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:26:04.238184  498114 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:26:04.238231  498114 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:26:04.238815  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:26:04.258120  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:26:04.277968  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:26:04.300985  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:26:04.320454  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:26:04.339152  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:26:04.357609  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:26:04.381171  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:26:04.400414  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:26:04.419875  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:26:04.439079  498114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:26:04.457256  498114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:26:04.470865  498114 ssh_runner.go:195] Run: openssl version
	I1026 09:26:04.477123  498114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:26:04.485748  498114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:26:04.489634  498114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:26:04.489738  498114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:26:04.532790  498114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:26:04.541533  498114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:26:04.550195  498114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:26:04.554252  498114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:26:04.554314  498114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:26:04.597520  498114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:26:04.609287  498114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:26:04.621081  498114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:26:04.625541  498114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:26:04.625609  498114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:26:04.668796  498114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
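The hash-then-link sequence above implements OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must be reachable through a symlink named <subject_hash>.0. The same result by hand, using the minikube CA from the log:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    H=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 for this CA, per the log
    sudo ln -fs "$PEM" "/etc/ssl/certs/${H}.0"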
	I1026 09:26:04.677287  498114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:26:04.681145  498114 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:26:04.681203  498114 kubeadm.go:400] StartCluster: {Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:26:04.681278  498114 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:26:04.681338  498114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:26:04.709883  498114 cri.go:89] found id: ""
	I1026 09:26:04.710031  498114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:26:04.718440  498114 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:26:04.726846  498114 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:26:04.726912  498114 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:26:04.734851  498114 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:26:04.734870  498114 kubeadm.go:157] found existing configuration files:
	
	I1026 09:26:04.734922  498114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:26:04.742820  498114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:26:04.742932  498114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:26:04.752653  498114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:26:04.764934  498114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:26:04.765001  498114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:26:04.773100  498114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:26:04.782260  498114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:26:04.782327  498114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:26:04.790370  498114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:26:04.801153  498114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:26:04.801222  498114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 09:26:04.810813  498114 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:26:04.854117  498114 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:26:04.854346  498114 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:26:04.879597  498114 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:26:04.879675  498114 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:26:04.879717  498114 kubeadm.go:318] OS: Linux
	I1026 09:26:04.879777  498114 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:26:04.879833  498114 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:26:04.879885  498114 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:26:04.879943  498114 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:26:04.879999  498114 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:26:04.880062  498114 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:26:04.880124  498114 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:26:04.880181  498114 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:26:04.880234  498114 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:26:04.957657  498114 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:26:04.957777  498114 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:26:04.957876  498114 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
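As the preflight hint above says, the control-plane images can be pre-pulled so init does not block on the network. With this cluster's runtime socket and version, that would be roughly:

    # pre-pull control-plane images through CRI-O's socket
    sudo kubeadm config images pull \
        --kubernetes-version v1.34.1 \
        --cri-socket unix:///var/run/crio/crio.sock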
	I1026 09:26:04.972387  498114 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:26:04.979429  498114 out.go:252]   - Generating certificates and keys ...
	I1026 09:26:04.979605  498114 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:26:04.979718  498114 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:26:05.253166  498114 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:26:05.649325  498114 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:26:05.826449  498114 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	W1026 09:26:01.993366  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:26:04.493658  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:26:06.955836  498114 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:26:07.213813  498114 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:26:07.214169  498114 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-491604] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 09:26:07.459941  498114 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 09:26:07.460343  498114 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-491604] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 09:26:08.241659  498114 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:26:08.979553  498114 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:26:09.365643  498114 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:26:09.365733  498114 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:26:09.762652  498114 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:26:10.659161  498114 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	W1026 09:26:06.993078  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:26:08.994105  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:26:10.994852  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:26:11.376454  498114 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:26:11.937301  498114 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:26:12.455252  498114 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:26:12.455734  498114 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:26:12.458439  498114 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:26:12.462023  498114 out.go:252]   - Booting up control plane ...
	I1026 09:26:12.462222  498114 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:26:12.462350  498114 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:26:12.463995  498114 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:26:12.479991  498114 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:26:12.480196  498114 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:26:12.488294  498114 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:26:12.488752  498114 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:26:12.489097  498114 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 09:26:12.616682  498114 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 09:26:12.616878  498114 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 09:26:13.620914  498114 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003005672s
	I1026 09:26:13.624148  498114 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 09:26:13.624558  498114 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 09:26:13.624902  498114 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 09:26:13.625220  498114 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1026 09:26:13.493077  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:26:15.493768  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:26:17.475728  498114 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.850276967s
	I1026 09:26:20.403772  498114 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.778126545s
	I1026 09:26:20.627253  498114 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002223529s
	I1026 09:26:20.652061  498114 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:26:20.666051  498114 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:26:20.682591  498114 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:26:20.682843  498114 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-491604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:26:20.696479  498114 kubeadm.go:318] [bootstrap-token] Using token: 11ibim.cx3a17n08cwlmcib
	I1026 09:26:20.699606  498114 out.go:252]   - Configuring RBAC rules ...
	I1026 09:26:20.699820  498114 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:26:20.703615  498114 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:26:20.711747  498114 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:26:20.717851  498114 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:26:20.722152  498114 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:26:20.726111  498114 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	W1026 09:26:17.993924  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	W1026 09:26:20.493408  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:26:21.033881  498114 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:26:21.476158  498114 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:26:22.035061  498114 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:26:22.036226  498114 kubeadm.go:318] 
	I1026 09:26:22.036335  498114 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:26:22.036349  498114 kubeadm.go:318] 
	I1026 09:26:22.036442  498114 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:26:22.036453  498114 kubeadm.go:318] 
	I1026 09:26:22.036491  498114 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:26:22.036567  498114 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:26:22.036624  498114 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:26:22.036632  498114 kubeadm.go:318] 
	I1026 09:26:22.036697  498114 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:26:22.036714  498114 kubeadm.go:318] 
	I1026 09:26:22.036764  498114 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:26:22.036773  498114 kubeadm.go:318] 
	I1026 09:26:22.036827  498114 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:26:22.036907  498114 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:26:22.036982  498114 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:26:22.036990  498114 kubeadm.go:318] 
	I1026 09:26:22.038039  498114 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:26:22.038219  498114 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:26:22.038234  498114 kubeadm.go:318] 
	I1026 09:26:22.038331  498114 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 11ibim.cx3a17n08cwlmcib \
	I1026 09:26:22.038448  498114 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:26:22.038471  498114 kubeadm.go:318] 	--control-plane 
	I1026 09:26:22.038475  498114 kubeadm.go:318] 
	I1026 09:26:22.038571  498114 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:26:22.038575  498114 kubeadm.go:318] 
	I1026 09:26:22.038667  498114 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 11ibim.cx3a17n08cwlmcib \
	I1026 09:26:22.038810  498114 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:26:22.041377  498114 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:26:22.041647  498114 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:26:22.041771  498114 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
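The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA cert (kept under /var/lib/minikube/certs here) with the standard openssl pipeline, assuming an RSA CA key as the documented recipe does:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'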
	I1026 09:26:22.041811  498114 cni.go:84] Creating CNI manager for ""
	I1026 09:26:22.041824  498114 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:26:22.044972  498114 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:26:22.047926  498114 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:26:22.053125  498114 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:26:22.053152  498114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:26:22.068607  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
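minikube renders the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the node's own kubectl, as shown above. Checking that the CNI pods came up afterwards might look like this (the app=kindnet label is assumed from the kindnet manifest):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get pods -l app=kindnet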
	I1026 09:26:22.383332  498114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:26:22.383493  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:22.383513  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-491604 minikube.k8s.io/updated_at=2025_10_26T09_26_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=no-preload-491604 minikube.k8s.io/primary=true
	I1026 09:26:22.647976  498114 ops.go:34] apiserver oom_adj: -16
	I1026 09:26:22.648122  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:23.149172  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:23.648882  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:24.148798  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:24.649073  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:25.148210  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:25.648639  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1026 09:26:22.493897  494585 node_ready.go:57] node "embed-certs-204381" has "Ready":"False" status (will retry)
	I1026 09:26:24.994005  494585 node_ready.go:49] node "embed-certs-204381" is "Ready"
	I1026 09:26:24.994090  494585 node_ready.go:38] duration metric: took 40.004379082s for node "embed-certs-204381" to be "Ready" ...
	I1026 09:26:24.994121  494585 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:26:24.994212  494585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:26:25.014880  494585 api_server.go:72] duration metric: took 41.385607457s to wait for apiserver process to appear ...
	I1026 09:26:25.014908  494585 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:26:25.014941  494585 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 09:26:25.025523  494585 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:26:25.028152  494585 api_server.go:141] control plane version: v1.34.1
	I1026 09:26:25.028178  494585 api_server.go:131] duration metric: took 13.262539ms to wait for apiserver health ...
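The healthz probe is a plain HTTPS GET against the apiserver; the 200/ok pair above corresponds to:

    # -k skips verification since the endpoint serves minikube's self-signed CA
    curl -k https://192.168.76.2:8443/healthz
    # expected body on success: ok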
	I1026 09:26:25.028187  494585 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:26:25.045850  494585 system_pods.go:59] 8 kube-system pods found
	I1026 09:26:25.045897  494585 system_pods.go:61] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Pending
	I1026 09:26:25.045904  494585 system_pods.go:61] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running
	I1026 09:26:25.045909  494585 system_pods.go:61] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:26:25.045913  494585 system_pods.go:61] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running
	I1026 09:26:25.045918  494585 system_pods.go:61] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running
	I1026 09:26:25.045922  494585 system_pods.go:61] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:26:25.045927  494585 system_pods.go:61] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running
	I1026 09:26:25.045931  494585 system_pods.go:61] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Pending
	I1026 09:26:25.045946  494585 system_pods.go:74] duration metric: took 17.743739ms to wait for pod list to return data ...
	I1026 09:26:25.045960  494585 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:26:25.058518  494585 default_sa.go:45] found service account: "default"
	I1026 09:26:25.058546  494585 default_sa.go:55] duration metric: took 12.579634ms for default service account to be created ...
	I1026 09:26:25.058557  494585 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:26:25.070112  494585 system_pods.go:86] 8 kube-system pods found
	I1026 09:26:25.070158  494585 system_pods.go:89] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:26:25.070166  494585 system_pods.go:89] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running
	I1026 09:26:25.070175  494585 system_pods.go:89] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:26:25.070179  494585 system_pods.go:89] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running
	I1026 09:26:25.070184  494585 system_pods.go:89] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running
	I1026 09:26:25.070188  494585 system_pods.go:89] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:26:25.070198  494585 system_pods.go:89] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running
	I1026 09:26:25.070202  494585 system_pods.go:89] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Pending
	I1026 09:26:25.070237  494585 retry.go:31] will retry after 274.021804ms: missing components: kube-dns
	I1026 09:26:25.353656  494585 system_pods.go:86] 8 kube-system pods found
	I1026 09:26:25.353706  494585 system_pods.go:89] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:26:25.353714  494585 system_pods.go:89] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running
	I1026 09:26:25.353721  494585 system_pods.go:89] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:26:25.353725  494585 system_pods.go:89] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running
	I1026 09:26:25.353730  494585 system_pods.go:89] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running
	I1026 09:26:25.353734  494585 system_pods.go:89] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:26:25.353738  494585 system_pods.go:89] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running
	I1026 09:26:25.353744  494585 system_pods.go:89] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:26:25.353767  494585 retry.go:31] will retry after 293.360084ms: missing components: kube-dns
	I1026 09:26:25.654467  494585 system_pods.go:86] 8 kube-system pods found
	I1026 09:26:25.654516  494585 system_pods.go:89] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:26:25.654524  494585 system_pods.go:89] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running
	I1026 09:26:25.654531  494585 system_pods.go:89] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:26:25.654535  494585 system_pods.go:89] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running
	I1026 09:26:25.654540  494585 system_pods.go:89] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running
	I1026 09:26:25.654543  494585 system_pods.go:89] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:26:25.654548  494585 system_pods.go:89] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running
	I1026 09:26:25.654553  494585 system_pods.go:89] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 09:26:25.654580  494585 retry.go:31] will retry after 448.739838ms: missing components: kube-dns
	I1026 09:26:26.108905  494585 system_pods.go:86] 8 kube-system pods found
	I1026 09:26:26.108933  494585 system_pods.go:89] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Running
	I1026 09:26:26.108940  494585 system_pods.go:89] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running
	I1026 09:26:26.108945  494585 system_pods.go:89] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:26:26.108949  494585 system_pods.go:89] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running
	I1026 09:26:26.108954  494585 system_pods.go:89] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running
	I1026 09:26:26.108958  494585 system_pods.go:89] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:26:26.108963  494585 system_pods.go:89] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running
	I1026 09:26:26.108967  494585 system_pods.go:89] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Running
	I1026 09:26:26.108976  494585 system_pods.go:126] duration metric: took 1.050412946s to wait for k8s-apps to be running ...
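	
	The retry lines above ("missing components: kube-dns") re-list the kube-system pods with a growing backoff (~274ms, ~293ms, ~449ms) until every required component is Running; coredns carries the k8s-app=kube-dns label the wait is keyed on. The same check can be reproduced directly:
	
	  # kube-system pods that are not yet Running
	  kubectl -n kube-system get pods --field-selector=status.phase!=Running
	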
	I1026 09:26:26.108994  494585 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:26:26.109051  494585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:26:26.129155  494585 system_svc.go:56] duration metric: took 20.15355ms WaitForService to wait for kubelet
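	
	The kubelet check above relies on systemctl's exit status rather than its output; a zero exit means the unit is active:
	
	  sudo systemctl is-active --quiet kubelet && echo kubelet running
	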
	I1026 09:26:26.129198  494585 kubeadm.go:586] duration metric: took 42.499930042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:26:26.129218  494585 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:26:26.132708  494585 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:26:26.132741  494585 node_conditions.go:123] node cpu capacity is 2
	I1026 09:26:26.132755  494585 node_conditions.go:105] duration metric: took 3.530811ms to run NodePressure ...
	I1026 09:26:26.132767  494585 start.go:241] waiting for startup goroutines ...
	I1026 09:26:26.132775  494585 start.go:246] waiting for cluster config update ...
	I1026 09:26:26.132787  494585 start.go:255] writing updated cluster config ...
	I1026 09:26:26.133096  494585 ssh_runner.go:195] Run: rm -f paused
	I1026 09:26:26.137147  494585 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:26:26.141255  494585 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7mm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.149217  494585 pod_ready.go:94] pod "coredns-66bc5c9577-r7mm4" is "Ready"
	I1026 09:26:26.149241  494585 pod_ready.go:86] duration metric: took 7.947156ms for pod "coredns-66bc5c9577-r7mm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.158967  494585 pod_ready.go:83] waiting for pod "etcd-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.165459  494585 pod_ready.go:94] pod "etcd-embed-certs-204381" is "Ready"
	I1026 09:26:26.165504  494585 pod_ready.go:86] duration metric: took 6.50036ms for pod "etcd-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.170144  494585 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.177888  494585 pod_ready.go:94] pod "kube-apiserver-embed-certs-204381" is "Ready"
	I1026 09:26:26.177919  494585 pod_ready.go:86] duration metric: took 7.743928ms for pod "kube-apiserver-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.190230  494585 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.541431  494585 pod_ready.go:94] pod "kube-controller-manager-embed-certs-204381" is "Ready"
	I1026 09:26:26.541461  494585 pod_ready.go:86] duration metric: took 351.200044ms for pod "kube-controller-manager-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:26.148927  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:26.649194  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:27.148473  498114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:26:27.260920  498114 kubeadm.go:1113] duration metric: took 4.877481611s to wait for elevateKubeSystemPrivileges
	I1026 09:26:27.260972  498114 kubeadm.go:402] duration metric: took 22.579772494s to StartCluster
	I1026 09:26:27.260991  498114 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:27.261075  498114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:26:27.262691  498114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:26:27.262985  498114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:26:27.262988  498114 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:26:27.263271  498114 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:26:27.263308  498114 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:26:27.263375  498114 addons.go:69] Setting storage-provisioner=true in profile "no-preload-491604"
	I1026 09:26:27.263391  498114 addons.go:238] Setting addon storage-provisioner=true in "no-preload-491604"
	I1026 09:26:27.263412  498114 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:26:27.263872  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:26:27.264246  498114 addons.go:69] Setting default-storageclass=true in profile "no-preload-491604"
	I1026 09:26:27.264266  498114 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-491604"
	I1026 09:26:27.264549  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:26:27.266236  498114 out.go:179] * Verifying Kubernetes components...
	I1026 09:26:27.270127  498114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:26:27.301426  498114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:26:26.742126  494585 pod_ready.go:83] waiting for pod "kube-proxy-75p8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:27.141248  494585 pod_ready.go:94] pod "kube-proxy-75p8k" is "Ready"
	I1026 09:26:27.141274  494585 pod_ready.go:86] duration metric: took 399.071526ms for pod "kube-proxy-75p8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:27.344404  494585 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:27.741159  494585 pod_ready.go:94] pod "kube-scheduler-embed-certs-204381" is "Ready"
	I1026 09:26:27.741191  494585 pod_ready.go:86] duration metric: took 396.760013ms for pod "kube-scheduler-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:26:27.741204  494585 pod_ready.go:40] duration metric: took 1.604024698s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:26:27.844831  494585 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:26:27.848206  494585 out.go:179] * Done! kubectl is now configured to use "embed-certs-204381" cluster and "default" namespace by default
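	
	The "extra waiting" phase from 09:26:26 onward walks the label selectors listed at pod_ready.go:37 and blocks until each matching pod is Ready or gone. Assuming direct kubectl access to the cluster, kubectl wait expresses the same condition, for example:
	
	  kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	  kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m
	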
	I1026 09:26:27.305129  498114 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:26:27.305151  498114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:26:27.305233  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:26:27.316194  498114 addons.go:238] Setting addon default-storageclass=true in "no-preload-491604"
	I1026 09:26:27.316241  498114 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:26:27.316701  498114 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:26:27.338678  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:26:27.368066  498114 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:26:27.368102  498114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:26:27.368173  498114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:26:27.396421  498114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:26:27.681960  498114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:26:27.701821  498114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:26:27.713259  498114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:26:27.713363  498114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:26:28.635058  498114 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
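	
	The bash pipeline at 09:26:27.713 rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway (and a log directive ahead of errors). The injected stanza, which can be confirmed with "kubectl -n kube-system get configmap coredns -o yaml", is:
	
	  hosts {
	     192.168.85.1 host.minikube.internal
	     fallthrough
	  }
	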
	I1026 09:26:28.637881  498114 node_ready.go:35] waiting up to 6m0s for node "no-preload-491604" to be "Ready" ...
	I1026 09:26:28.639651  498114 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 09:26:28.642492  498114 addons.go:514] duration metric: took 1.37916498s for enable addons: enabled=[default-storageclass storage-provisioner]
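	
	The addon steps above (scp the manifest onto the node, then kubectl apply against /etc/kubernetes/addons/) are broadly what the user-facing command performs:
	
	  minikube -p no-preload-491604 addons enable storage-provisioner
	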
	I1026 09:26:29.140639  498114 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-491604" context rescaled to 1 replicas
	W1026 09:26:30.642143  498114 node_ready.go:57] node "no-preload-491604" has "Ready":"False" status (will retry)
	W1026 09:26:33.140945  498114 node_ready.go:57] node "no-preload-491604" has "Ready":"False" status (will retry)
	W1026 09:26:35.141082  498114 node_ready.go:57] node "no-preload-491604" has "Ready":"False" status (will retry)
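	
	The node_ready retries above wait for the node's Ready condition to flip to True; a one-shot equivalent with the 6m0s budget from the log:
	
	  kubectl wait node no-preload-491604 --for=condition=Ready --timeout=6m
	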
	
	
	==> CRI-O <==
	Oct 26 09:26:25 embed-certs-204381 crio[841]: time="2025-10-26T09:26:25.436517683Z" level=info msg="Created container eeda887b1c7eb331572985f6f71439117c6f102a4659756e57c9f1d6e4924924: kube-system/coredns-66bc5c9577-r7mm4/coredns" id=560e6b80-f845-4c43-8ab4-3edbb2772963 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:26:25 embed-certs-204381 crio[841]: time="2025-10-26T09:26:25.440123293Z" level=info msg="Starting container: eeda887b1c7eb331572985f6f71439117c6f102a4659756e57c9f1d6e4924924" id=c051ccee-168a-4eb7-a00e-f38d97ba0005 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:26:25 embed-certs-204381 crio[841]: time="2025-10-26T09:26:25.450566333Z" level=info msg="Started container" PID=1737 containerID=eeda887b1c7eb331572985f6f71439117c6f102a4659756e57c9f1d6e4924924 description=kube-system/coredns-66bc5c9577-r7mm4/coredns id=c051ccee-168a-4eb7-a00e-f38d97ba0005 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cd333316c7ac09eb2e01a342e2367b1918e8a0bc2b540e279824f476b2dc898
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.478642262Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bd8656eb-b570-4444-b11d-555b5162c95e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.478750473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.491054198Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a UID:7bfa86d2-0d6d-4a37-b944-03fd17347db8 NetNS:/var/run/netns/9ee89d6d-caef-44fe-9c38-e80ddfcb6bc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ccb0}] Aliases:map[]}"
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.49109396Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.501148672Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a UID:7bfa86d2-0d6d-4a37-b944-03fd17347db8 NetNS:/var/run/netns/9ee89d6d-caef-44fe-9c38-e80ddfcb6bc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ccb0}] Aliases:map[]}"
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.501472491Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.508548751Z" level=info msg="Ran pod sandbox 0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a with infra container: default/busybox/POD" id=bd8656eb-b570-4444-b11d-555b5162c95e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.509793797Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee077f8a-8cf9-45d8-aa05-749db558c3d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.510018309Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ee077f8a-8cf9-45d8-aa05-749db558c3d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.510119315Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ee077f8a-8cf9-45d8-aa05-749db558c3d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.511182008Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9af69f5-da23-46ae-969c-dc39b45764c5 name=/runtime.v1.ImageService/PullImage
	Oct 26 09:26:28 embed-certs-204381 crio[841]: time="2025-10-26T09:26:28.51358405Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.713266809Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c9af69f5-da23-46ae-969c-dc39b45764c5 name=/runtime.v1.ImageService/PullImage
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.714367467Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=378dd3f7-9520-4584-b691-076370c95d19 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.718651511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=688c4c28-b547-41b8-9dd6-536134b8ab40 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.726569808Z" level=info msg="Creating container: default/busybox/busybox" id=c5f69bf4-a7e9-40ab-bcd3-1f6f3ec13bcb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.726747862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.732061104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.732572274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.755428284Z" level=info msg="Created container 608298c61880e2d69c6a97f224d2115c16f30f6be6b271490d240b2038706c6f: default/busybox/busybox" id=c5f69bf4-a7e9-40ab-bcd3-1f6f3ec13bcb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.759277663Z" level=info msg="Starting container: 608298c61880e2d69c6a97f224d2115c16f30f6be6b271490d240b2038706c6f" id=b7890dc1-8cf7-4e99-b05a-e28226c83c96 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:26:30 embed-certs-204381 crio[841]: time="2025-10-26T09:26:30.765193662Z" level=info msg="Started container" PID=1798 containerID=608298c61880e2d69c6a97f224d2115c16f30f6be6b271490d240b2038706c6f description=default/busybox/busybox id=b7890dc1-8cf7-4e99-b05a-e28226c83c96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a
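	
	The pull sequence recorded above (ImageStatus miss, PullImage, CreateContainer, StartContainer) can be replayed by hand on the node with crictl if a pull needs to be reproduced:
	
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	  sudo crictl images | grep busybox
	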
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	608298c61880e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   0d1bb994feaf5       busybox                                      default
	eeda887b1c7eb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   4cd333316c7ac       coredns-66bc5c9577-r7mm4                     kube-system
	bef3502eb631d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   0f1c66bd77f1f       storage-provisioner                          kube-system
	ed7e677e7ccd4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   57a3e7466d5de       kindnet-dcxxb                                kube-system
	ee9e55f69d8dc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   c93144fcd713e       kube-proxy-75p8k                             kube-system
	bf7589be35cd5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   c3b21d312d459       kube-controller-manager-embed-certs-204381   kube-system
	ea5cae375874f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   8ce215f64bbc6       kube-scheduler-embed-certs-204381            kube-system
	df94c24314987       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   d622192f069b5       etcd-embed-certs-204381                      kube-system
	e8e1dad5b198a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   2c633cfc277d8       kube-apiserver-embed-certs-204381            kube-system
	
	
	==> coredns [eeda887b1c7eb331572985f6f71439117c6f102a4659756e57c9f1d6e4924924] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39800 - 65273 "HINFO IN 2642651273866208965.9172625376383071740. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014409541s
	
	
	==> describe nodes <==
	Name:               embed-certs-204381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-204381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=embed-certs-204381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-204381
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:26:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:26:24 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:26:24 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:26:24 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:26:24 +0000   Sun, 26 Oct 2025 09:26:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-204381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4c29094e-f18a-4ac6-86a6-71f16f27aacd
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-r7mm4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-204381                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-dcxxb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-204381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-204381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-75p8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-204381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-204381 event: Registered Node embed-certs-204381 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-204381 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [df94c243149872c63f744d6c25027f7c3e45076c24ebb9e491d2c967eac6fa0e] <==
	{"level":"warn","ts":"2025-10-26T09:25:32.927601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:32.958966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:32.970025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.006261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.040561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.061286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.078337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.099870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.112633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.153301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.163016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.183412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.207431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.219892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.233522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.249267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.271789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.280787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.299284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.311944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.331275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.359069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.372787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.387875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:25:33.462972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:26:39 up  3:09,  0 user,  load average: 4.70, 3.89, 3.07
	Linux embed-certs-204381 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed7e677e7ccd485dbd1c6e3815dc49a78512d414f755f9fa7a86c8f586ba0d0b] <==
	I1026 09:25:44.314950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:25:44.315691       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:25:44.318858       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:25:44.318884       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:25:44.318902       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:25:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:25:44.599683       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:25:44.599712       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:25:44.599723       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:25:44.600557       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:26:14.600523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:26:14.600637       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:26:14.600713       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:26:14.600746       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 09:26:16.299893       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:26:16.300003       1 metrics.go:72] Registering metrics
	I1026 09:26:16.300108       1 controller.go:711] "Syncing nftables rules"
	I1026 09:26:24.538301       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:26:24.538452       1 main.go:301] handling current node
	I1026 09:26:34.531234       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:26:34.531272       1 main.go:301] handling current node
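	
	The four "Failed to watch" errors at 09:26:14 are list calls against the in-cluster service IP timing out; they clear by 09:26:16 when the caches sync, so this looks transient. A quick reachability check from the node, should it recur:
	
	  timeout 5 bash -c '</dev/tcp/10.96.0.1/443' && echo reachable
	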
	
	
	==> kube-apiserver [e8e1dad5b198a70f31b534893155bed7069965ad0453c992bc486e9557d1b1e2] <==
	I1026 09:25:34.297971       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:25:34.355063       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 09:25:34.360118       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:25:34.386510       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:25:34.412738       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:25:34.413116       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 09:25:34.434633       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:25:34.437485       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:25:35.048574       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 09:25:35.058556       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 09:25:35.060248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:25:36.176338       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:25:36.241816       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:25:36.395964       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 09:25:36.405610       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 09:25:36.407370       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:25:36.413209       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:25:37.257690       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:25:37.585694       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:25:37.705200       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 09:25:37.753076       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 09:25:43.049835       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:25:43.059642       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:25:43.095200       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:25:43.198600       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [bf7589be35cd5d94c4a0c0e07ac82a0745b83ba5d069439cfabd5154c19e3447] <==
	I1026 09:25:42.325496       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 09:25:42.332488       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:25:42.334462       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:25:42.335373       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:25:42.337778       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:25:42.337941       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:25:42.338184       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 09:25:42.338239       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:25:42.338306       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:25:42.341163       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:25:42.341823       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:25:42.342518       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:25:42.346405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:25:42.346501       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 09:25:42.346518       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:25:42.346527       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 09:25:42.347575       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:25:42.353885       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 09:25:42.363916       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:25:42.364644       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:25:42.365037       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:25:42.365454       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-204381"
	I1026 09:25:42.365572       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 09:25:42.385447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-204381" podCIDRs=["10.244.0.0/24"]
	I1026 09:26:27.372429       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ee9e55f69d8dc052647f9c6db3c23ff0e7200d8b4f0c55b93ab27c6098fb31fa] <==
	I1026 09:25:44.285904       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:25:44.449260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:25:44.552197       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:25:44.552235       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 09:25:44.552310       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:25:44.617353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:25:44.617412       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:25:44.625228       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:25:44.625765       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:25:44.625782       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:25:44.649325       1 config.go:309] "Starting node config controller"
	I1026 09:25:44.649348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:25:44.650220       1 config.go:200] "Starting service config controller"
	I1026 09:25:44.650229       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:25:44.650244       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:25:44.650248       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:25:44.650260       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:25:44.650264       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:25:44.750875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:25:44.750927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:25:44.750939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:25:44.750974       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ea5cae375874f9599756ec63e3044ce56ba56e3c0bb1fe482a6f2c4ea28c535e] <==
	E1026 09:25:34.345042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:25:34.354582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:25:34.354700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:25:34.354851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:25:34.354916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:25:34.354967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:25:34.359002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:25:34.360777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:25:34.361620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:25:34.361775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:25:34.361883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:25:34.362005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:25:34.362071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:25:34.362118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:25:35.173009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:25:35.301274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:25:35.312659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 09:25:35.317598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:25:35.367350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:25:35.476312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:25:35.500857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:25:35.665488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:25:35.679896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:25:35.732949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1026 09:25:38.220872       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:25:39 embed-certs-204381 kubelet[1313]: I1026 09:25:39.479847    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-204381" podStartSLOduration=1.479829161 podStartE2EDuration="1.479829161s" podCreationTimestamp="2025-10-26 09:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:25:39.449497158 +0000 UTC m=+1.985610020" watchObservedRunningTime="2025-10-26 09:25:39.479829161 +0000 UTC m=+2.015942015"
	Oct 26 09:25:42 embed-certs-204381 kubelet[1313]: I1026 09:25:42.359102    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 09:25:42 embed-certs-204381 kubelet[1313]: I1026 09:25:42.359754    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395117    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3-xtables-lock\") pod \"kindnet-dcxxb\" (UID: \"5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3\") " pod="kube-system/kindnet-dcxxb"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395188    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3-lib-modules\") pod \"kindnet-dcxxb\" (UID: \"5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3\") " pod="kube-system/kindnet-dcxxb"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395303    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3-cni-cfg\") pod \"kindnet-dcxxb\" (UID: \"5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3\") " pod="kube-system/kindnet-dcxxb"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395329    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wvx\" (UniqueName: \"kubernetes.io/projected/5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3-kube-api-access-j2wvx\") pod \"kindnet-dcxxb\" (UID: \"5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3\") " pod="kube-system/kindnet-dcxxb"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395406    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65c22908-92e3-48d1-a15d-8c695de4420a-kube-proxy\") pod \"kube-proxy-75p8k\" (UID: \"65c22908-92e3-48d1-a15d-8c695de4420a\") " pod="kube-system/kube-proxy-75p8k"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395481    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65c22908-92e3-48d1-a15d-8c695de4420a-lib-modules\") pod \"kube-proxy-75p8k\" (UID: \"65c22908-92e3-48d1-a15d-8c695de4420a\") " pod="kube-system/kube-proxy-75p8k"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395551    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65c22908-92e3-48d1-a15d-8c695de4420a-xtables-lock\") pod \"kube-proxy-75p8k\" (UID: \"65c22908-92e3-48d1-a15d-8c695de4420a\") " pod="kube-system/kube-proxy-75p8k"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.395599    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtzw\" (UniqueName: \"kubernetes.io/projected/65c22908-92e3-48d1-a15d-8c695de4420a-kube-api-access-cmtzw\") pod \"kube-proxy-75p8k\" (UID: \"65c22908-92e3-48d1-a15d-8c695de4420a\") " pod="kube-system/kube-proxy-75p8k"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: I1026 09:25:43.611568    1313 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 09:25:43 embed-certs-204381 kubelet[1313]: W1026 09:25:43.898097    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-57a3e7466d5deb815af9795599b8b701ad762e2d32a814fcfcb9b9af0d3e19b3 WatchSource:0}: Error finding container 57a3e7466d5deb815af9795599b8b701ad762e2d32a814fcfcb9b9af0d3e19b3: Status 404 returned error can't find the container with id 57a3e7466d5deb815af9795599b8b701ad762e2d32a814fcfcb9b9af0d3e19b3
	Oct 26 09:25:44 embed-certs-204381 kubelet[1313]: I1026 09:25:44.825483    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75p8k" podStartSLOduration=1.825459658 podStartE2EDuration="1.825459658s" podCreationTimestamp="2025-10-26 09:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:25:44.772888761 +0000 UTC m=+7.309001606" watchObservedRunningTime="2025-10-26 09:25:44.825459658 +0000 UTC m=+7.361572503"
	Oct 26 09:25:44 embed-certs-204381 kubelet[1313]: I1026 09:25:44.865423    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dcxxb" podStartSLOduration=1.86540397 podStartE2EDuration="1.86540397s" podCreationTimestamp="2025-10-26 09:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:25:44.86504989 +0000 UTC m=+7.401162759" watchObservedRunningTime="2025-10-26 09:25:44.86540397 +0000 UTC m=+7.401516823"
	Oct 26 09:26:24 embed-certs-204381 kubelet[1313]: I1026 09:26:24.977626    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 09:26:25 embed-certs-204381 kubelet[1313]: I1026 09:26:25.102997    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c074f51-2576-4be7-8643-1ee880c3182d-config-volume\") pod \"coredns-66bc5c9577-r7mm4\" (UID: \"4c074f51-2576-4be7-8643-1ee880c3182d\") " pod="kube-system/coredns-66bc5c9577-r7mm4"
	Oct 26 09:26:25 embed-certs-204381 kubelet[1313]: I1026 09:26:25.103054    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0ed81b53-0c23-47f0-9e38-122cd2bf5f0a-tmp\") pod \"storage-provisioner\" (UID: \"0ed81b53-0c23-47f0-9e38-122cd2bf5f0a\") " pod="kube-system/storage-provisioner"
	Oct 26 09:26:25 embed-certs-204381 kubelet[1313]: I1026 09:26:25.103081    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkgbb\" (UniqueName: \"kubernetes.io/projected/0ed81b53-0c23-47f0-9e38-122cd2bf5f0a-kube-api-access-gkgbb\") pod \"storage-provisioner\" (UID: \"0ed81b53-0c23-47f0-9e38-122cd2bf5f0a\") " pod="kube-system/storage-provisioner"
	Oct 26 09:26:25 embed-certs-204381 kubelet[1313]: I1026 09:26:25.103104    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzm6\" (UniqueName: \"kubernetes.io/projected/4c074f51-2576-4be7-8643-1ee880c3182d-kube-api-access-xjzm6\") pod \"coredns-66bc5c9577-r7mm4\" (UID: \"4c074f51-2576-4be7-8643-1ee880c3182d\") " pod="kube-system/coredns-66bc5c9577-r7mm4"
	Oct 26 09:26:25 embed-certs-204381 kubelet[1313]: W1026 09:26:25.379044    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-4cd333316c7ac09eb2e01a342e2367b1918e8a0bc2b540e279824f476b2dc898 WatchSource:0}: Error finding container 4cd333316c7ac09eb2e01a342e2367b1918e8a0bc2b540e279824f476b2dc898: Status 404 returned error can't find the container with id 4cd333316c7ac09eb2e01a342e2367b1918e8a0bc2b540e279824f476b2dc898
	Oct 26 09:26:25 embed-certs-204381 kubelet[1313]: I1026 09:26:25.855946    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.855925956 podStartE2EDuration="40.855925956s" podCreationTimestamp="2025-10-26 09:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:26:25.855860938 +0000 UTC m=+48.391973791" watchObservedRunningTime="2025-10-26 09:26:25.855925956 +0000 UTC m=+48.392038801"
	Oct 26 09:26:28 embed-certs-204381 kubelet[1313]: I1026 09:26:28.167628    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r7mm4" podStartSLOduration=45.167606563 podStartE2EDuration="45.167606563s" podCreationTimestamp="2025-10-26 09:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:26:25.876868579 +0000 UTC m=+48.412981440" watchObservedRunningTime="2025-10-26 09:26:28.167606563 +0000 UTC m=+50.703719416"
	Oct 26 09:26:28 embed-certs-204381 kubelet[1313]: I1026 09:26:28.224422    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbcvv\" (UniqueName: \"kubernetes.io/projected/7bfa86d2-0d6d-4a37-b944-03fd17347db8-kube-api-access-gbcvv\") pod \"busybox\" (UID: \"7bfa86d2-0d6d-4a37-b944-03fd17347db8\") " pod="default/busybox"
	Oct 26 09:26:28 embed-certs-204381 kubelet[1313]: W1026 09:26:28.507053    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a WatchSource:0}: Error finding container 0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a: Status 404 returned error can't find the container with id 0d1bb994feaf531af1f948f66e4a7dd9163208bcb08b0c0e66ee79627df7ca8a
	
	
	==> storage-provisioner [bef3502eb631d0ea30b4b55c5c1978ab4e0e052fa26d14627d2ce3faa89e1499] <==
	I1026 09:26:25.438108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:26:25.454070       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:26:25.454120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:26:25.462464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:25.480053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:26:25.480337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:26:25.480573       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-204381_b0bf9da7-b692-47da-a1be-8e33b91c4d86!
	I1026 09:26:25.481696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff3b783c-a30e-49f8-b18c-92455e17892c", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-204381_b0bf9da7-b692-47da-a1be-8e33b91c4d86 became leader
	W1026 09:26:25.493615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:25.507042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:26:25.580903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-204381_b0bf9da7-b692-47da-a1be-8e33b91c4d86!
	W1026 09:26:27.510254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:27.516667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:29.520113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:29.524732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:31.528512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:31.533598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:33.536812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:33.557092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:35.560578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:35.564937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:37.577941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:37.585178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
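The storage-provisioner warnings above are expected noise: the provisioner still takes its leader-election lock on a v1 Endpoints object (see the "attempting to acquire leader lease" and Kind:"Endpoints" event lines), which Kubernetes deprecated in favor of discovery.k8s.io/v1 EndpointSlice, so every ~2s renew logs the deprecation. A minimal sketch for inspecting the lock object by hand, assuming kubectl is pointed at this profile's context:

    # The Endpoints-based lock the provisioner renews in the log above
    kubectl --context embed-certs-204381 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # Modern controllers hold a coordination.k8s.io/v1 Lease instead:
    kubectl --context embed-certs-204381 -n kube-system get lease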
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204381 -n embed-certs-204381
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-204381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.61s)
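A note on the kube-scheduler log above: the burst of "Failed to watch ... is forbidden" errors at 09:25:34-09:25:35 is the usual startup race, where the scheduler begins listing resources before the API server has reconciled its RBAC bindings; it clears on its own once "Caches are synced" appears at 09:25:38 and is unrelated to the addon failure. A hedged way to confirm the permissions after the fact, using kubectl impersonation:

    kubectl --context embed-certs-204381 auth can-i list pods --as=system:kube-scheduler
    kubectl --context embed-certs-204381 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler
    # both should print "yes" once RBAC has settled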

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (330.024557ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:26:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
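The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused check: per the error string, it shells into the node and runs `sudo runc list -f json` before enabling the addon, and with the crio runtime the runc state directory /run/runc can be absent right after a (re)start, producing exactly the "open /run/runc: no such file or directory" error. A hedged triage sketch, run from the host (the node container name is assumed to match the profile, which the docker inspect output below confirms):

    docker exec no-preload-491604 sudo ls -ld /run/runc   # does the runc state dir exist yet?
    docker exec no-preload-491604 sudo runc list -f json  # the exact call minikube makes
    docker exec no-preload-491604 sudo crictl ps -a       # CRI-level view that does not depend on /run/runc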
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-491604 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-491604 describe deploy/metrics-server -n kube-system: exit status 1 (103.249468ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-491604 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
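The assertion at start_stop_delete_test.go:219 greps the `kubectl describe deploy/metrics-server` output for the registry-rewritten image; because the enable command failed, the deployment was never created and the captured info string is empty. A sketch of the same check done by hand (on this cluster both commands return NotFound):

    kubectl --context no-preload-491604 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected output when the addon works: fake.domain/registry.k8s.io/echoserver:1.4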
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-491604
helpers_test.go:243: (dbg) docker inspect no-preload-491604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db",
	        "Created": "2025-10-26T09:25:37.402820807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 498447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:25:37.537353511Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/hosts",
	        "LogPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db-json.log",
	        "Name": "/no-preload-491604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db",
	                "LowerDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491604",
	                "Source": "/var/lib/docker/volumes/no-preload-491604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491604",
	                "name.minikube.sigs.k8s.io": "no-preload-491604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d76352e25ae664b24909cd53de54e43c6a5d32aae0146d3f6d2135019c37085",
	            "SandboxKey": "/var/run/docker/netns/2d76352e25ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:7a:95:e6:f7:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b3fac8619483e027c0a41271c69d710c2df0c76a965d01b990e19e9b1b9a2bd",
	                    "EndpointID": "b41fc48e590a70c49dd283e78ad1f6cbbe05bcf286d45de43c9129b33e7d9fd8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491604",
	                        "0b11d1185923"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
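The inspect output shows all five node ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1 with dynamically assigned host ports. Rather than eyeballing the JSON, the API server mapping can be pulled with a Go template; a minimal sketch:

    docker inspect no-preload-491604 \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # prints 33448 for the container state captured above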
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-491604 logs -n 25: (1.46800574s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:22 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-167519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │                     │
	│ stop    │ -p old-k8s-version-167519 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:23 UTC │
	│ start   │ -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:23 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                                                                                               │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                                                                                               │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
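	In the Audit table above, rows with an empty END TIME are invocations that had not finished at capture time: either failed, like the `addons enable metrics-server` and `pause` commands this test group is flagging, or still in flight, like the final embed-certs-204381 start whose log follows below. If this minikube build supports it, the table can also be dumped on its own via the logs command's audit flag (a hedged sketch, not verified against this exact binary):

	    out/minikube-linux-arm64 logs --audit -p no-preload-491604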
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:26:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:26:52.314020  502650 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:26:52.314208  502650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:26:52.314240  502650 out.go:374] Setting ErrFile to fd 2...
	I1026 09:26:52.314262  502650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:26:52.314535  502650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:26:52.314989  502650 out.go:368] Setting JSON to false
	I1026 09:26:52.315984  502650 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11363,"bootTime":1761459450,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:26:52.316097  502650 start.go:141] virtualization:  
	I1026 09:26:52.319302  502650 out.go:179] * [embed-certs-204381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:26:52.323280  502650 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:26:52.323383  502650 notify.go:220] Checking for updates...
	I1026 09:26:52.329188  502650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:26:52.332302  502650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:26:52.335263  502650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:26:52.338226  502650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:26:52.341108  502650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:26:52.344809  502650 config.go:182] Loaded profile config "embed-certs-204381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:26:52.345509  502650 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:26:52.384066  502650 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:26:52.384318  502650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:26:52.450637  502650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:26:52.440752871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:26:52.450899  502650 docker.go:318] overlay module found
	I1026 09:26:52.454069  502650 out.go:179] * Using the docker driver based on existing profile
	I1026 09:26:52.457086  502650 start.go:305] selected driver: docker
	I1026 09:26:52.457173  502650 start.go:925] validating driver "docker" against &{Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:26:52.457306  502650 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:26:52.458242  502650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:26:52.513595  502650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:26:52.504450759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:26:52.513937  502650 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:26:52.513971  502650 cni.go:84] Creating CNI manager for ""
	I1026 09:26:52.514028  502650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:26:52.514077  502650 start.go:349] cluster config:
	{Name:embed-certs-204381 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-204381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:26:52.517240  502650 out.go:179] * Starting "embed-certs-204381" primary control-plane node in "embed-certs-204381" cluster
	I1026 09:26:52.520114  502650 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:26:52.523112  502650 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:26:52.525898  502650 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:26:52.525979  502650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:26:52.526033  502650 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:26:52.526052  502650 cache.go:58] Caching tarball of preloaded images
	I1026 09:26:52.526152  502650 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:26:52.526163  502650 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:26:52.526272  502650 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/embed-certs-204381/config.json ...
	I1026 09:26:52.546033  502650 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:26:52.546054  502650 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:26:52.546067  502650 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:26:52.546091  502650 start.go:360] acquireMachinesLock for embed-certs-204381: {Name:mkd161c65630ff13edac2ff621a7dae8e5ffecd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:26:52.546142  502650 start.go:364] duration metric: took 32.837µs to acquireMachinesLock for "embed-certs-204381"
	I1026 09:26:52.546160  502650 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:26:52.546166  502650 fix.go:54] fixHost starting: 
	I1026 09:26:52.546419  502650 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:26:52.566285  502650 fix.go:112] recreateIfNeeded on embed-certs-204381: state=Stopped err=<nil>
	W1026 09:26:52.566314  502650 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 26 09:26:43 no-preload-491604 crio[839]: time="2025-10-26T09:26:43.753565613Z" level=info msg="Starting container: 83b416fadb0deb164ccc3017068b1fe0e5c2b91e3d4d5f2de17a2ceb0c5cf6ab" id=eb704a55-d6a6-4174-888d-eaaf7dbddaa1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:26:43 no-preload-491604 crio[839]: time="2025-10-26T09:26:43.769950015Z" level=info msg="Started container" PID=2486 containerID=83b416fadb0deb164ccc3017068b1fe0e5c2b91e3d4d5f2de17a2ceb0c5cf6ab description=kube-system/coredns-66bc5c9577-2rq75/coredns id=eb704a55-d6a6-4174-888d-eaaf7dbddaa1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6aa1e4eab4aa84634020b99e14a5fa397ebaa91b9035d510235fe0aa1c224329
	Oct 26 09:26:43 no-preload-491604 crio[839]: time="2025-10-26T09:26:43.770082931Z" level=info msg="Started container" PID=2485 containerID=b76225f1bf50a550aa99c689a4c66c49fc855876b486c4ca2c0e1e357b1663d6 description=kube-system/storage-provisioner/storage-provisioner id=4a7e0d2f-2ffa-4085-b8c9-d4edc7c602f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b99acb04fdf7a0e68c93697518b8721f93c242bc200f5e3683ac53807ae0634
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.389166087Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fec1c648-6581-44b2-8ab7-5a18ec589d02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.389239007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.394532992Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2e385470a4cecbeec1721c3b5e3be8ed2daa32965f639f67ca6e8a4e02c05c42 UID:9e3cede7-8f2e-49cf-bdc2-b16fe5818763 NetNS:/var/run/netns/cd77dda8-faaf-43ef-a588-355ecfe7fe7f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d2c8}] Aliases:map[]}"
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.394734537Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.405894527Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2e385470a4cecbeec1721c3b5e3be8ed2daa32965f639f67ca6e8a4e02c05c42 UID:9e3cede7-8f2e-49cf-bdc2-b16fe5818763 NetNS:/var/run/netns/cd77dda8-faaf-43ef-a588-355ecfe7fe7f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d2c8}] Aliases:map[]}"
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.406048145Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.408714444Z" level=info msg="Ran pod sandbox 2e385470a4cecbeec1721c3b5e3be8ed2daa32965f639f67ca6e8a4e02c05c42 with infra container: default/busybox/POD" id=fec1c648-6581-44b2-8ab7-5a18ec589d02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.411821053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a768b0b4-613f-4054-8d15-5b78617b8c4a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.411951492Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a768b0b4-613f-4054-8d15-5b78617b8c4a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.411990557Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a768b0b4-613f-4054-8d15-5b78617b8c4a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.412817275Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=89f4d56a-c795-4401-98e6-94deb19b5ba9 name=/runtime.v1.ImageService/PullImage
	Oct 26 09:26:47 no-preload-491604 crio[839]: time="2025-10-26T09:26:47.41419902Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.420735754Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=89f4d56a-c795-4401-98e6-94deb19b5ba9 name=/runtime.v1.ImageService/PullImage
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.421317809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=033d8f4d-da9b-421d-9960-260487056cbf name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.422854255Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=350e221a-78b1-4518-b01c-af29d5e5c2da name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.427891695Z" level=info msg="Creating container: default/busybox/busybox" id=b3a7638d-0562-40ff-9e2a-e43d8c91f5e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.428010786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.445969289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.446656674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.465051794Z" level=info msg="Created container aa8874f46d7a4c58721800805b6f80e76cb6651b360a7e6bb87f4fb0661384b9: default/busybox/busybox" id=b3a7638d-0562-40ff-9e2a-e43d8c91f5e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.466214828Z" level=info msg="Starting container: aa8874f46d7a4c58721800805b6f80e76cb6651b360a7e6bb87f4fb0661384b9" id=21e53c0d-f75f-4bc1-9962-5ba1ee11f91b name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:26:49 no-preload-491604 crio[839]: time="2025-10-26T09:26:49.468189804Z" level=info msg="Started container" PID=2545 containerID=aa8874f46d7a4c58721800805b6f80e76cb6651b360a7e6bb87f4fb0661384b9 description=default/busybox/busybox id=21e53c0d-f75f-4bc1-9962-5ba1ee11f91b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e385470a4cecbeec1721c3b5e3be8ed2daa32965f639f67ca6e8a4e02c05c42
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	aa8874f46d7a4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   2e385470a4cec       busybox                                     default
	83b416fadb0de       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   6aa1e4eab4aa8       coredns-66bc5c9577-2rq75                    kube-system
	b76225f1bf50a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   3b99acb04fdf7       storage-provisioner                         kube-system
	fa983d9bfefc5       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   cce50c88172db       kindnet-4g8pl                               kube-system
	f52c4c739d86f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   4dc52398cd13d       kube-proxy-tpv97                            kube-system
	82dfb9b58f3a8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      43 seconds ago      Running             kube-controller-manager   0                   fcc188d009f71       kube-controller-manager-no-preload-491604   kube-system
	a65f7da642c80       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      43 seconds ago      Running             kube-scheduler            0                   4085e98adf410       kube-scheduler-no-preload-491604            kube-system
	79c138b3a2171       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      43 seconds ago      Running             kube-apiserver            0                   f2c77da314b89       kube-apiserver-no-preload-491604            kube-system
	482b5e0f3b698       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      43 seconds ago      Running             etcd                      0                   e36a417225615       etcd-no-preload-491604                      kube-system
	
	
	==> coredns [83b416fadb0deb164ccc3017068b1fe0e5c2b91e3d4d5f2de17a2ceb0c5cf6ab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36293 - 42338 "HINFO IN 4744991132054666897.7572054317639259230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02339745s
	
	
	==> describe nodes <==
	Name:               no-preload-491604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-491604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=no-preload-491604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_26_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-491604
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:26:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:26:52 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:26:52 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:26:52 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:26:52 +0000   Sun, 26 Oct 2025 09:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-491604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d17013c2-3271-42c0-8ce8-feb077b52c71
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-2rq75                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-491604                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-4g8pl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-491604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-491604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-tpv97                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-491604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s                kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-491604 event: Registered Node no-preload-491604 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-491604 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 09:01] overlayfs: idmapped layers are currently not supported
	[Oct26 09:02] overlayfs: idmapped layers are currently not supported
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [482b5e0f3b69842a531280ef3aefaefff75fa577a80ac5220812ab84913d6fa4] <==
	{"level":"warn","ts":"2025-10-26T09:26:16.918862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:16.937251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:16.959081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:16.993607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.003253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.013478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.035385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.068692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.072843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.092981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.113593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.132227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.148637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.165357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.183554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.199577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.220231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.240216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.274330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.290829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.322965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.366612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.425866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.439913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:26:17.539818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50470","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:26:58 up  3:09,  0 user,  load average: 3.49, 3.67, 3.02
	Linux no-preload-491604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fa983d9bfefc537414df708333840528a0accc5dd00426b62ad4a17b6b4156b4] <==
	I1026 09:26:32.406459       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:26:32.407171       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:26:32.407317       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:26:32.407337       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:26:32.407348       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:26:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:26:32.701016       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:26:32.701045       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:26:32.701056       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:26:32.701175       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 09:26:32.803792       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:26:32.803872       1 metrics.go:72] Registering metrics
	I1026 09:26:32.803942       1 controller.go:711] "Syncing nftables rules"
	I1026 09:26:42.706814       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:26:42.706940       1 main.go:301] handling current node
	I1026 09:26:52.699934       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:26:52.699968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79c138b3a217120daa86ef0d645934a57ee8a795f4eac49f9454f4508b58eb13] <==
	I1026 09:26:18.538970       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:26:18.539010       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:26:18.540639       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:26:18.587101       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:26:18.588041       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:26:18.588109       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 09:26:18.608859       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:26:18.609467       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:26:19.245591       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 09:26:19.253117       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 09:26:19.253203       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:26:20.276865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:26:20.394861       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:26:20.548167       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 09:26:20.562447       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 09:26:20.563440       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:26:20.573121       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:26:21.363749       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:26:21.454527       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:26:21.475273       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 09:26:21.490628       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 09:26:26.661557       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:26:27.117151       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:26:27.122533       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:26:27.473690       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [82dfb9b58f3a8bb04f1f570e0ef709beba2f46ff008dc9e3fd9a42dd7d618e54] <==
	I1026 09:26:26.370257       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 09:26:26.380151       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-491604" podCIDRs=["10.244.0.0/24"]
	I1026 09:26:26.382439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:26:26.384632       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 09:26:26.388912       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:26:26.394674       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:26:26.398352       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 09:26:26.401723       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:26:26.401840       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:26:26.401910       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-491604"
	I1026 09:26:26.402072       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:26:26.401723       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:26:26.402406       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 09:26:26.402445       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:26:26.402529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 09:26:26.402606       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 09:26:26.402644       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 09:26:26.402841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 09:26:26.405688       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:26:26.405778       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:26:26.407210       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:26:26.407226       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:26:26.409385       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:26:26.411383       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:26:46.438911       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f52c4c739d86ffecd62f7aa94804dffc97885e6599c453ae3291da5bebc13cc6] <==
	I1026 09:26:29.841915       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:26:29.929776       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:26:30.031546       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:26:30.031730       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:26:30.032089       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:26:30.101178       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:26:30.101241       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:26:30.114947       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:26:30.117361       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:26:30.117415       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:26:30.123753       1 config.go:200] "Starting service config controller"
	I1026 09:26:30.123859       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:26:30.123937       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:26:30.123971       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:26:30.124017       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:26:30.124045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:26:30.124515       1 config.go:309] "Starting node config controller"
	I1026 09:26:30.126190       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:26:30.126204       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:26:30.224217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:26:30.224278       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:26:30.225691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a65f7da642c801dc5b4586f00528ba13ccaede6c8480f401905aedac36a7acc2] <==
	I1026 09:26:19.091990       1 serving.go:386] Generated self-signed cert in-memory
	W1026 09:26:20.355634       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 09:26:20.355784       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 09:26:20.355820       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 09:26:20.355831       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 09:26:20.385208       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:26:20.385238       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:26:20.388462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:26:20.388555       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:26:20.390355       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:26:20.391915       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 09:26:20.397358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 09:26:21.789918       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:26:27 no-preload-491604 kubelet[1999]: I1026 09:26:27.677888    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669ea85d-25d5-4e3e-b4b6-1c86141967f3-xtables-lock\") pod \"kube-proxy-tpv97\" (UID: \"669ea85d-25d5-4e3e-b4b6-1c86141967f3\") " pod="kube-system/kube-proxy-tpv97"
	Oct 26 09:26:27 no-preload-491604 kubelet[1999]: I1026 09:26:27.677958    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/669ea85d-25d5-4e3e-b4b6-1c86141967f3-kube-proxy\") pod \"kube-proxy-tpv97\" (UID: \"669ea85d-25d5-4e3e-b4b6-1c86141967f3\") " pod="kube-system/kube-proxy-tpv97"
	Oct 26 09:26:27 no-preload-491604 kubelet[1999]: I1026 09:26:27.678005    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669ea85d-25d5-4e3e-b4b6-1c86141967f3-lib-modules\") pod \"kube-proxy-tpv97\" (UID: \"669ea85d-25d5-4e3e-b4b6-1c86141967f3\") " pod="kube-system/kube-proxy-tpv97"
	Oct 26 09:26:28 no-preload-491604 kubelet[1999]: E1026 09:26:28.830960    1999 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 26 09:26:28 no-preload-491604 kubelet[1999]: E1026 09:26:28.831017    1999 projected.go:196] Error preparing data for projected volume kube-api-access-qtqsm for pod kube-system/kindnet-4g8pl: failed to sync configmap cache: timed out waiting for the condition
	Oct 26 09:26:28 no-preload-491604 kubelet[1999]: E1026 09:26:28.831143    1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c83a24cf-3ae8-42a5-9f26-13ff5989e6ee-kube-api-access-qtqsm podName:c83a24cf-3ae8-42a5-9f26-13ff5989e6ee nodeName:}" failed. No retries permitted until 2025-10-26 09:26:29.331106782 +0000 UTC m=+8.039691309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qtqsm" (UniqueName: "kubernetes.io/projected/c83a24cf-3ae8-42a5-9f26-13ff5989e6ee-kube-api-access-qtqsm") pod "kindnet-4g8pl" (UID: "c83a24cf-3ae8-42a5-9f26-13ff5989e6ee") : failed to sync configmap cache: timed out waiting for the condition
	Oct 26 09:26:28 no-preload-491604 kubelet[1999]: E1026 09:26:28.908437    1999 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 26 09:26:28 no-preload-491604 kubelet[1999]: E1026 09:26:28.908496    1999 projected.go:196] Error preparing data for projected volume kube-api-access-c5xgr for pod kube-system/kube-proxy-tpv97: failed to sync configmap cache: timed out waiting for the condition
	Oct 26 09:26:28 no-preload-491604 kubelet[1999]: E1026 09:26:28.908576    1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669ea85d-25d5-4e3e-b4b6-1c86141967f3-kube-api-access-c5xgr podName:669ea85d-25d5-4e3e-b4b6-1c86141967f3 nodeName:}" failed. No retries permitted until 2025-10-26 09:26:29.4085551 +0000 UTC m=+8.117139627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c5xgr" (UniqueName: "kubernetes.io/projected/669ea85d-25d5-4e3e-b4b6-1c86141967f3-kube-api-access-c5xgr") pod "kube-proxy-tpv97" (UID: "669ea85d-25d5-4e3e-b4b6-1c86141967f3") : failed to sync configmap cache: timed out waiting for the condition
	Oct 26 09:26:29 no-preload-491604 kubelet[1999]: I1026 09:26:29.399570    1999 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 09:26:29 no-preload-491604 kubelet[1999]: W1026 09:26:29.688426    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-cce50c88172dbc0cc373a03bd521229feb7f46613aef928ea356ae4826790910 WatchSource:0}: Error finding container cce50c88172dbc0cc373a03bd521229feb7f46613aef928ea356ae4826790910: Status 404 returned error can't find the container with id cce50c88172dbc0cc373a03bd521229feb7f46613aef928ea356ae4826790910
	Oct 26 09:26:29 no-preload-491604 kubelet[1999]: W1026 09:26:29.744309    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-4dc52398cd13da2712625e167e90bf1abe2e078570fdf7ce1be99182913cc1eb WatchSource:0}: Error finding container 4dc52398cd13da2712625e167e90bf1abe2e078570fdf7ce1be99182913cc1eb: Status 404 returned error can't find the container with id 4dc52398cd13da2712625e167e90bf1abe2e078570fdf7ce1be99182913cc1eb
	Oct 26 09:26:31 no-preload-491604 kubelet[1999]: I1026 09:26:31.499681    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tpv97" podStartSLOduration=4.499662059 podStartE2EDuration="4.499662059s" podCreationTimestamp="2025-10-26 09:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:26:30.605363375 +0000 UTC m=+9.313947918" watchObservedRunningTime="2025-10-26 09:26:31.499662059 +0000 UTC m=+10.208246594"
	Oct 26 09:26:32 no-preload-491604 kubelet[1999]: I1026 09:26:32.610263    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4g8pl" podStartSLOduration=2.96788834 podStartE2EDuration="5.610236508s" podCreationTimestamp="2025-10-26 09:26:27 +0000 UTC" firstStartedPulling="2025-10-26 09:26:29.690439423 +0000 UTC m=+8.399023950" lastFinishedPulling="2025-10-26 09:26:32.332787583 +0000 UTC m=+11.041372118" observedRunningTime="2025-10-26 09:26:32.609984443 +0000 UTC m=+11.318568986" watchObservedRunningTime="2025-10-26 09:26:32.610236508 +0000 UTC m=+11.318821051"
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: I1026 09:26:43.306268    1999 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: I1026 09:26:43.417755    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5a9e6e7-af2c-4731-bedc-f98677818988-tmp\") pod \"storage-provisioner\" (UID: \"a5a9e6e7-af2c-4731-bedc-f98677818988\") " pod="kube-system/storage-provisioner"
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: I1026 09:26:43.417803    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdv5\" (UniqueName: \"kubernetes.io/projected/a5a9e6e7-af2c-4731-bedc-f98677818988-kube-api-access-hgdv5\") pod \"storage-provisioner\" (UID: \"a5a9e6e7-af2c-4731-bedc-f98677818988\") " pod="kube-system/storage-provisioner"
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: I1026 09:26:43.417826    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b400112c-40a5-4ef6-82d5-b4533cb6e4ca-config-volume\") pod \"coredns-66bc5c9577-2rq75\" (UID: \"b400112c-40a5-4ef6-82d5-b4533cb6e4ca\") " pod="kube-system/coredns-66bc5c9577-2rq75"
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: I1026 09:26:43.417846    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsr7r\" (UniqueName: \"kubernetes.io/projected/b400112c-40a5-4ef6-82d5-b4533cb6e4ca-kube-api-access-lsr7r\") pod \"coredns-66bc5c9577-2rq75\" (UID: \"b400112c-40a5-4ef6-82d5-b4533cb6e4ca\") " pod="kube-system/coredns-66bc5c9577-2rq75"
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: W1026 09:26:43.685008    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-3b99acb04fdf7a0e68c93697518b8721f93c242bc200f5e3683ac53807ae0634 WatchSource:0}: Error finding container 3b99acb04fdf7a0e68c93697518b8721f93c242bc200f5e3683ac53807ae0634: Status 404 returned error can't find the container with id 3b99acb04fdf7a0e68c93697518b8721f93c242bc200f5e3683ac53807ae0634
	Oct 26 09:26:43 no-preload-491604 kubelet[1999]: W1026 09:26:43.689743    1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-6aa1e4eab4aa84634020b99e14a5fa397ebaa91b9035d510235fe0aa1c224329 WatchSource:0}: Error finding container 6aa1e4eab4aa84634020b99e14a5fa397ebaa91b9035d510235fe0aa1c224329: Status 404 returned error can't find the container with id 6aa1e4eab4aa84634020b99e14a5fa397ebaa91b9035d510235fe0aa1c224329
	Oct 26 09:26:44 no-preload-491604 kubelet[1999]: I1026 09:26:44.647855    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.647835811 podStartE2EDuration="16.647835811s" podCreationTimestamp="2025-10-26 09:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:26:44.634040218 +0000 UTC m=+23.342624745" watchObservedRunningTime="2025-10-26 09:26:44.647835811 +0000 UTC m=+23.356420337"
	Oct 26 09:26:47 no-preload-491604 kubelet[1999]: I1026 09:26:47.078484    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2rq75" podStartSLOduration=20.078464763 podStartE2EDuration="20.078464763s" podCreationTimestamp="2025-10-26 09:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:26:44.64958931 +0000 UTC m=+23.358173845" watchObservedRunningTime="2025-10-26 09:26:47.078464763 +0000 UTC m=+25.787049290"
	Oct 26 09:26:47 no-preload-491604 kubelet[1999]: I1026 09:26:47.148603    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6sq\" (UniqueName: \"kubernetes.io/projected/9e3cede7-8f2e-49cf-bdc2-b16fe5818763-kube-api-access-9v6sq\") pod \"busybox\" (UID: \"9e3cede7-8f2e-49cf-bdc2-b16fe5818763\") " pod="default/busybox"
	Oct 26 09:26:56 no-preload-491604 kubelet[1999]: E1026 09:26:56.253003    1999 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42046->127.0.0.1:35395: write tcp 127.0.0.1:42046->127.0.0.1:35395: write: broken pipe
	
	
	==> storage-provisioner [b76225f1bf50a550aa99c689a4c66c49fc855876b486c4ca2c0e1e357b1663d6] <==
	I1026 09:26:43.796084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:26:43.825803       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:26:43.825922       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:26:43.828284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:43.833959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:26:43.834195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:26:43.834428       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-491604_3ba28982-c265-44f7-aa83-0ab6c1356f8f!
	I1026 09:26:43.835487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53d2d1f5-98c5-4ce4-af2a-3a4b0bc16b41", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-491604_3ba28982-c265-44f7-aa83-0ab6c1356f8f became leader
	W1026 09:26:43.840931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:43.847134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:26:43.934795       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-491604_3ba28982-c265-44f7-aa83-0ab6c1356f8f!
	W1026 09:26:45.850693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:45.855331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:47.858669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:47.866686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:49.869676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:49.876391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:51.880569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:51.885087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:53.888437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:53.893097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:55.896224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:55.900652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:57.906059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:26:57.920851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491604 -n no-preload-491604
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-491604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.99s)
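To re-run just this failing subtest outside CI, the integration suite can be filtered with Go's -run flag, whose pattern is split on slashes and matched element-by-element against the subtest name. The following is a minimal sketch, assuming the repository's standard test/integration layout and a prebuilt out/minikube-linux-arm64 binary; any driver or runtime flags the suite expects are omitted here and would need to be supplied.

	# Illustrative only: select the single failed subtest by its full slash-separated name.
	go test ./test/integration -v -run 'TestStartStop/group/no-preload/serial/EnableAddonWhileActive'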

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-204381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-204381 --alsologtostderr -v=1: exit status 80 (1.76456454s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-204381 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 09:27:52.336355  507655 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:27:52.336540  507655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:27:52.336553  507655 out.go:374] Setting ErrFile to fd 2...
	I1026 09:27:52.336558  507655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:27:52.336820  507655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:27:52.337149  507655 out.go:368] Setting JSON to false
	I1026 09:27:52.337179  507655 mustload.go:65] Loading cluster: embed-certs-204381
	I1026 09:27:52.337567  507655 config.go:182] Loaded profile config "embed-certs-204381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:52.338125  507655 cli_runner.go:164] Run: docker container inspect embed-certs-204381 --format={{.State.Status}}
	I1026 09:27:52.356538  507655 host.go:66] Checking if "embed-certs-204381" exists ...
	I1026 09:27:52.356867  507655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:27:52.424696  507655 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:27:52.414564471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:27:52.425405  507655 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-204381 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 09:27:52.428761  507655 out.go:179] * Pausing node embed-certs-204381 ... 
	I1026 09:27:52.431774  507655 host.go:66] Checking if "embed-certs-204381" exists ...
	I1026 09:27:52.432115  507655 ssh_runner.go:195] Run: systemctl --version
	I1026 09:27:52.432173  507655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-204381
	I1026 09:27:52.454851  507655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/embed-certs-204381/id_rsa Username:docker}
	I1026 09:27:52.565296  507655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:52.578511  507655 pause.go:52] kubelet running: true
	I1026 09:27:52.578599  507655 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:27:52.842569  507655 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:27:52.842660  507655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:27:52.911430  507655 cri.go:89] found id: "9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4"
	I1026 09:27:52.911452  507655 cri.go:89] found id: "e7f9e9902aa925c7efff9d04c9c478d1ff7cb3c07814d288d0998109c3d5d770"
	I1026 09:27:52.911458  507655 cri.go:89] found id: "1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247"
	I1026 09:27:52.911462  507655 cri.go:89] found id: "c589218ba65cfb5f8058e769abaec08033e797aa266d515739ceea95a26adbb3"
	I1026 09:27:52.911465  507655 cri.go:89] found id: "7ab5b4e25d7a54fac31b2dca5a6e398e10f0bbd81c9e4e4407ddd084251219b7"
	I1026 09:27:52.911470  507655 cri.go:89] found id: "fc645c6e07eb52dcf2d2a8c865d46ef41d8fb8a4a5bf76c369270785a3bb0d6e"
	I1026 09:27:52.911482  507655 cri.go:89] found id: "d4d7f74617d8d427b2faab1b3c5e48bbbae37682e6b48f8e1d3141a76e4a4b45"
	I1026 09:27:52.911486  507655 cri.go:89] found id: "c4cef10c093e047656711f6ddd43f45e451b4234b38559cf8799fd096a53eda3"
	I1026 09:27:52.911490  507655 cri.go:89] found id: "c9cbd9d3e4cfa1cf00ca6b7ab613ad7c0bbc25320fa33f24966b346c5cfee930"
	I1026 09:27:52.911496  507655 cri.go:89] found id: "0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	I1026 09:27:52.911499  507655 cri.go:89] found id: "ac441bded3b54ada4b84416b407f28fd84714df732c42deea0ac4709a5553635"
	I1026 09:27:52.911503  507655 cri.go:89] found id: ""
	I1026 09:27:52.911553  507655 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:27:52.922653  507655 retry.go:31] will retry after 162.901197ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:27:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:27:53.086094  507655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:53.099425  507655 pause.go:52] kubelet running: false
	I1026 09:27:53.099517  507655 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:27:53.274694  507655 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:27:53.274943  507655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:27:53.386368  507655 cri.go:89] found id: "9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4"
	I1026 09:27:53.386445  507655 cri.go:89] found id: "e7f9e9902aa925c7efff9d04c9c478d1ff7cb3c07814d288d0998109c3d5d770"
	I1026 09:27:53.386475  507655 cri.go:89] found id: "1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247"
	I1026 09:27:53.386494  507655 cri.go:89] found id: "c589218ba65cfb5f8058e769abaec08033e797aa266d515739ceea95a26adbb3"
	I1026 09:27:53.386526  507655 cri.go:89] found id: "7ab5b4e25d7a54fac31b2dca5a6e398e10f0bbd81c9e4e4407ddd084251219b7"
	I1026 09:27:53.386552  507655 cri.go:89] found id: "fc645c6e07eb52dcf2d2a8c865d46ef41d8fb8a4a5bf76c369270785a3bb0d6e"
	I1026 09:27:53.386571  507655 cri.go:89] found id: "d4d7f74617d8d427b2faab1b3c5e48bbbae37682e6b48f8e1d3141a76e4a4b45"
	I1026 09:27:53.386588  507655 cri.go:89] found id: "c4cef10c093e047656711f6ddd43f45e451b4234b38559cf8799fd096a53eda3"
	I1026 09:27:53.386623  507655 cri.go:89] found id: "c9cbd9d3e4cfa1cf00ca6b7ab613ad7c0bbc25320fa33f24966b346c5cfee930"
	I1026 09:27:53.386644  507655 cri.go:89] found id: "0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	I1026 09:27:53.386663  507655 cri.go:89] found id: "ac441bded3b54ada4b84416b407f28fd84714df732c42deea0ac4709a5553635"
	I1026 09:27:53.386693  507655 cri.go:89] found id: ""
	I1026 09:27:53.386802  507655 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:27:53.403979  507655 retry.go:31] will retry after 316.787269ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:27:53Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:27:53.721610  507655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:53.736212  507655 pause.go:52] kubelet running: false
	I1026 09:27:53.736315  507655 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:27:53.913641  507655 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:27:53.913748  507655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:27:53.997206  507655 cri.go:89] found id: "9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4"
	I1026 09:27:53.997286  507655 cri.go:89] found id: "e7f9e9902aa925c7efff9d04c9c478d1ff7cb3c07814d288d0998109c3d5d770"
	I1026 09:27:53.997306  507655 cri.go:89] found id: "1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247"
	I1026 09:27:53.997328  507655 cri.go:89] found id: "c589218ba65cfb5f8058e769abaec08033e797aa266d515739ceea95a26adbb3"
	I1026 09:27:53.997357  507655 cri.go:89] found id: "7ab5b4e25d7a54fac31b2dca5a6e398e10f0bbd81c9e4e4407ddd084251219b7"
	I1026 09:27:53.997383  507655 cri.go:89] found id: "fc645c6e07eb52dcf2d2a8c865d46ef41d8fb8a4a5bf76c369270785a3bb0d6e"
	I1026 09:27:53.997403  507655 cri.go:89] found id: "d4d7f74617d8d427b2faab1b3c5e48bbbae37682e6b48f8e1d3141a76e4a4b45"
	I1026 09:27:53.997439  507655 cri.go:89] found id: "c4cef10c093e047656711f6ddd43f45e451b4234b38559cf8799fd096a53eda3"
	I1026 09:27:53.997465  507655 cri.go:89] found id: "c9cbd9d3e4cfa1cf00ca6b7ab613ad7c0bbc25320fa33f24966b346c5cfee930"
	I1026 09:27:53.997487  507655 cri.go:89] found id: "0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	I1026 09:27:53.997507  507655 cri.go:89] found id: "ac441bded3b54ada4b84416b407f28fd84714df732c42deea0ac4709a5553635"
	I1026 09:27:53.997527  507655 cri.go:89] found id: ""
	I1026 09:27:53.997600  507655 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:27:54.015895  507655 out.go:203] 
	W1026 09:27:54.018959  507655 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:27:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:27:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 09:27:54.018999  507655 out.go:285] * 
	* 
	W1026 09:27:54.026455  507655 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 09:27:54.029855  507655 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-204381 --alsologtostderr -v=1 failed: exit status 80
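The stderr above traces the pause sequence: check that kubelet is active, run `sudo systemctl disable --now kubelet`, enumerate CRI containers per namespace with crictl, then `sudo runc list -f json`; the runc step fails because /run/runc does not exist, and minikube retries it after growing, jittered delays (162.9ms, then 316.8ms) before giving up with GUEST_PAUSE. Below is a minimal sketch of that retry-with-backoff shape, a simplification rather than minikube's actual retry.go.

	// Sketch: retry a failing step with a roughly doubling, jittered delay.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			if i == attempts-1 {
				break // out of attempts; surface the last error
			}
			d := base << uint(i)                           // base, 2*base, 4*base, ...
			d = d/2 + time.Duration(rand.Int63n(int64(d))) // +/-50% jitter
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(3, 150*time.Millisecond, func() error {
			calls++
			if calls < 3 { // fail twice, as the log above does
				return fmt.Errorf("open /run/runc: no such file or directory")
			}
			return nil
		})
		fmt.Println("result:", err)
	}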
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-204381
helpers_test.go:243: (dbg) docker inspect embed-certs-204381:

-- stdout --
	[
	    {
	        "Id": "fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab",
	        "Created": "2025-10-26T09:25:07.035838779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:26:52.60501554Z",
	            "FinishedAt": "2025-10-26T09:26:51.730765633Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/hosts",
	        "LogPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab-json.log",
	        "Name": "/embed-certs-204381",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-204381:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-204381",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab",
	                "LowerDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-204381",
	                "Source": "/var/lib/docker/volumes/embed-certs-204381/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-204381",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-204381",
	                "name.minikube.sigs.k8s.io": "embed-certs-204381",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7123c2cf08f742ea30613fd80c9cbc0dad21352d36b2bb7b63ee3645a0f36ac1",
	            "SandboxKey": "/var/run/docker/netns/7123c2cf08f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-204381": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:1c:84:ea:88:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33c235a08e203b1c326fabab7473b4ca038ba835f19a85fcec21303edd44d5d4",
	                    "EndpointID": "cf750c0573da88cd0bba555a484b3cb6149345724d492672074c17a4acd43486",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-204381",
	                        "fbf6b6fb12ea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
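Earlier in the stderr, the pause command derived its SSH endpoint from this same container record by running a Go template over .NetworkSettings.Ports. The sketch below reproduces that lookup with os/exec; it is illustrative, not the suite's cli_runner helper, and assumes a local docker daemon where the embed-certs-204381 container still exists.

	// Sketch: resolve the host port bound to the container's 22/tcp,
	// mirroring the `docker container inspect -f` call in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("embed-certs-204381")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh port:", port) // "33450" per the inspect output above
	}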
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381: exit status 2 (357.070309ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
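Both post-mortem status probes pass a Go template through --format ({{.APIServer}} for the earlier no-preload check, {{.Host}} here), which minikube executes against its status struct. A minimal sketch of that mechanism, with a hypothetical Status type standing in for minikube's own:

	// Sketch: render a --format style Go template against a status value.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the fields these tests query with --format.
	type Status struct {
		Host      string
		APIServer string
		Kubelet   string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}
		// The flag value is parsed as a template and executed against Status.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}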
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-204381 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-204381 logs -n 25: (1.382694081s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                          │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                    │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                          │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                          │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                          │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                             │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                              │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                              │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                             │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:27:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:27:11.981589  505287 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:27:11.981791  505287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:27:11.981814  505287 out.go:374] Setting ErrFile to fd 2...
	I1026 09:27:11.981841  505287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:27:11.982119  505287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:27:11.982539  505287 out.go:368] Setting JSON to false
	I1026 09:27:11.983584  505287 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11382,"bootTime":1761459450,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:27:11.983756  505287 start.go:141] virtualization:  
	I1026 09:27:11.989842  505287 out.go:179] * [no-preload-491604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:27:11.993308  505287 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:27:11.993382  505287 notify.go:220] Checking for updates...
	I1026 09:27:12.000313  505287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:27:12.004221  505287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:27:12.008072  505287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:27:12.011333  505287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:27:12.014526  505287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:27:12.018246  505287 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:12.018890  505287 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:27:12.053228  505287 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:27:12.053360  505287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:27:12.144737  505287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:27:12.133709998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:27:12.144866  505287 docker.go:318] overlay module found
	I1026 09:27:12.148345  505287 out.go:179] * Using the docker driver based on existing profile
	I1026 09:27:12.151776  505287 start.go:305] selected driver: docker
	I1026 09:27:12.151804  505287 start.go:925] validating driver "docker" against &{Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:27:12.151947  505287 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:27:12.152658  505287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:27:12.265196  505287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:27:12.252598908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:27:12.265555  505287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:27:12.265592  505287 cni.go:84] Creating CNI manager for ""
	I1026 09:27:12.265650  505287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:27:12.265695  505287 start.go:349] cluster config:
	{Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:27:12.270500  505287 out.go:179] * Starting "no-preload-491604" primary control-plane node in "no-preload-491604" cluster
	I1026 09:27:12.274069  505287 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:27:12.277572  505287 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:27:07.406578  502650 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:27:07.407708  502650 api_server.go:141] control plane version: v1.34.1
	I1026 09:27:07.407741  502650 api_server.go:131] duration metric: took 9.933947ms to wait for apiserver health ...
	I1026 09:27:07.407751  502650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:27:07.407939  502650 addons.go:514] duration metric: took 5.575307832s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 09:27:07.412123  502650 system_pods.go:59] 8 kube-system pods found
	I1026 09:27:07.412170  502650 system_pods.go:61] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:07.412198  502650 system_pods.go:61] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:07.412212  502650 system_pods.go:61] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:27:07.412239  502650 system_pods.go:61] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:07.412254  502650 system_pods.go:61] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:07.412270  502650 system_pods.go:61] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:27:07.412286  502650 system_pods.go:61] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:07.412292  502650 system_pods.go:61] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Running
	I1026 09:27:07.412317  502650 system_pods.go:74] duration metric: took 4.557762ms to wait for pod list to return data ...
	I1026 09:27:07.412332  502650 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:27:07.415742  502650 default_sa.go:45] found service account: "default"
	I1026 09:27:07.415777  502650 default_sa.go:55] duration metric: took 3.430954ms for default service account to be created ...
	I1026 09:27:07.415787  502650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:27:07.418917  502650 system_pods.go:86] 8 kube-system pods found
	I1026 09:27:07.418955  502650 system_pods.go:89] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:07.418965  502650 system_pods.go:89] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:07.418971  502650 system_pods.go:89] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:27:07.418980  502650 system_pods.go:89] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:07.418987  502650 system_pods.go:89] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:07.418996  502650 system_pods.go:89] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:27:07.419005  502650 system_pods.go:89] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:07.419014  502650 system_pods.go:89] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Running
	I1026 09:27:07.419021  502650 system_pods.go:126] duration metric: took 3.229293ms to wait for k8s-apps to be running ...
	I1026 09:27:07.419034  502650 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:27:07.419088  502650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:07.435852  502650 system_svc.go:56] duration metric: took 16.806382ms WaitForService to wait for kubelet
	I1026 09:27:07.435883  502650 kubeadm.go:586] duration metric: took 5.603601124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:27:07.435901  502650 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:27:07.440622  502650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:27:07.440656  502650 node_conditions.go:123] node cpu capacity is 2
	I1026 09:27:07.440670  502650 node_conditions.go:105] duration metric: took 4.729867ms to run NodePressure ...
	I1026 09:27:07.440683  502650 start.go:241] waiting for startup goroutines ...
	I1026 09:27:07.440691  502650 start.go:246] waiting for cluster config update ...
	I1026 09:27:07.440702  502650 start.go:255] writing updated cluster config ...
	I1026 09:27:07.440994  502650 ssh_runner.go:195] Run: rm -f paused
	I1026 09:27:07.445367  502650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:27:07.449664  502650 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7mm4" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:27:09.459980  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:11.957007  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:12.280837  505287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:27:12.280996  505287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json ...
	I1026 09:27:12.281304  505287 cache.go:107] acquiring lock: {Name:mkdad500968e7139280738b23aa2f2a019253f5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281382  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 09:27:12.281390  505287 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.06µs
	I1026 09:27:12.281398  505287 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 09:27:12.281410  505287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:27:12.281606  505287 cache.go:107] acquiring lock: {Name:mk599bfcacc3fab2a4670e80f471bbbcaed32bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281658  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 09:27:12.281666  505287 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 67.168µs
	I1026 09:27:12.281673  505287 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 09:27:12.281684  505287 cache.go:107] acquiring lock: {Name:mk14bfa53cd66a6ca87d606642a3cbb2da8dfbc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281713  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 09:27:12.281718  505287 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.759µs
	I1026 09:27:12.281724  505287 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 09:27:12.281734  505287 cache.go:107] acquiring lock: {Name:mk7d0c8b8f0317e07f3637091202b09c4c80488b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281761  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 09:27:12.281766  505287 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.624µs
	I1026 09:27:12.281772  505287 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 09:27:12.281783  505287 cache.go:107] acquiring lock: {Name:mk38cdae88a1b6a128486f22f7bf9cbf423409f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281808  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 09:27:12.281813  505287 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.966µs
	I1026 09:27:12.281819  505287 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 09:27:12.281830  505287 cache.go:107] acquiring lock: {Name:mk1911c569c908e58b6e7e7f80fbc6513309fcca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281855  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 09:27:12.281861  505287 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.541µs
	I1026 09:27:12.281866  505287 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 09:27:12.281875  505287 cache.go:107] acquiring lock: {Name:mkec65762826ae78f9cb76c49217646d15db3a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281900  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 09:27:12.281905  505287 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.024µs
	I1026 09:27:12.281923  505287 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 09:27:12.281934  505287 cache.go:107] acquiring lock: {Name:mk439f753472c6d4dacbd31dbea66f1a2f133a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281961  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 09:27:12.281966  505287 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.453µs
	I1026 09:27:12.281972  505287 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 09:27:12.281978  505287 cache.go:87] Successfully saved all images to host disk.
	I1026 09:27:12.304680  505287 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:27:12.304709  505287 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:27:12.304725  505287 cache.go:232] Successfully downloaded all kic artifacts
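
The cache pass above is a lock-check-reuse loop: each image path gets its own named lock, and an existing tarball on disk short-circuits the save, which is why every check above completes in tens of microseconds. A minimal sketch of the same idea, assuming a flat cache directory layout; imageCached and its path mangling are illustrative helpers, not minikube's actual API:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    )

    var cacheLocks sync.Map // one lock per cache path, like the named locks acquired above

    // imageCached reports whether the tarball for image already sits under cacheDir,
    // the check behind every "exists ... succeeded" pair in the log.
    func imageCached(cacheDir, image string) (bool, error) {
    	// registry.k8s.io/pause:3.10.1 -> registry.k8s.io/pause_3.10.1 on disk
    	rel := strings.ReplaceAll(image, ":", "_")
    	path := filepath.Join(cacheDir, rel)

    	muAny, _ := cacheLocks.LoadOrStore(path, &sync.Mutex{})
    	mu := muAny.(*sync.Mutex)
    	mu.Lock()
    	defer mu.Unlock()

    	_, err := os.Stat(path)
    	if err == nil {
    		return true, nil // cache hit: skip the save-to-tar step
    	}
    	if os.IsNotExist(err) {
    		return false, nil // caller would pull the image and save the tar here
    	}
    	return false, err
    }

    func main() {
    	hit, err := imageCached("/tmp/minikube-cache/images/arm64", "registry.k8s.io/pause:3.10.1")
    	fmt.Println(hit, err)
    }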
	I1026 09:27:12.304761  505287 start.go:360] acquireMachinesLock for no-preload-491604: {Name:mkc6d58300c0451128c3270d72a7123ff4bec2e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.304819  505287 start.go:364] duration metric: took 37.375µs to acquireMachinesLock for "no-preload-491604"
	I1026 09:27:12.304845  505287 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:27:12.304858  505287 fix.go:54] fixHost starting: 
	I1026 09:27:12.305122  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:12.324470  505287 fix.go:112] recreateIfNeeded on no-preload-491604: state=Stopped err=<nil>
	W1026 09:27:12.324500  505287 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:27:12.327778  505287 out.go:252] * Restarting existing docker container for "no-preload-491604" ...
	I1026 09:27:12.327862  505287 cli_runner.go:164] Run: docker start no-preload-491604
	I1026 09:27:12.680145  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:12.726526  505287 kic.go:430] container "no-preload-491604" state is running.
	I1026 09:27:12.726983  505287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:27:12.757263  505287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json ...
	I1026 09:27:12.757486  505287 machine.go:93] provisionDockerMachine start ...
	I1026 09:27:12.757544  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:12.782421  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:12.782796  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:12.782808  505287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:27:12.783546  505287 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38080->127.0.0.1:33455: read: connection reset by peer
	I1026 09:27:15.946522  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-491604
	
	I1026 09:27:15.946556  505287 ubuntu.go:182] provisioning hostname "no-preload-491604"
	I1026 09:27:15.946617  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:15.969504  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:15.969859  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:15.969890  505287 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-491604 && echo "no-preload-491604" | sudo tee /etc/hostname
	I1026 09:27:16.152999  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-491604
	
	I1026 09:27:16.153078  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:16.176825  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:16.177138  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:16.177159  505287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491604/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:27:16.353124  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:27:16.353158  505287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:27:16.353179  505287 ubuntu.go:190] setting up certificates
	I1026 09:27:16.353190  505287 provision.go:84] configureAuth start
	I1026 09:27:16.353249  505287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:27:16.384420  505287 provision.go:143] copyHostCerts
	I1026 09:27:16.384488  505287 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:27:16.384512  505287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:27:16.384586  505287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:27:16.384699  505287 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:27:16.384710  505287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:27:16.384737  505287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:27:16.384800  505287 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:27:16.384808  505287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:27:16.384832  505287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:27:16.384892  505287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.no-preload-491604 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-491604]
	W1026 09:27:14.458346  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:16.956480  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:17.512825  505287 provision.go:177] copyRemoteCerts
	I1026 09:27:17.512897  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:27:17.512957  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:17.555177  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:17.664670  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:27:17.686635  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:27:17.710074  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:27:17.733478  505287 provision.go:87] duration metric: took 1.380270859s to configureAuth
	I1026 09:27:17.733505  505287 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:27:17.733717  505287 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:17.733839  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:17.753772  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:17.754098  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:17.754119  505287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:27:18.119512  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:27:18.119538  505287 machine.go:96] duration metric: took 5.362043072s to provisionDockerMachine
	I1026 09:27:18.119549  505287 start.go:293] postStartSetup for "no-preload-491604" (driver="docker")
	I1026 09:27:18.119562  505287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:27:18.119622  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:27:18.119681  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.143932  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.275532  505287 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:27:18.279342  505287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:27:18.279373  505287 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:27:18.279385  505287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:27:18.279450  505287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:27:18.279530  505287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:27:18.279633  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:27:18.295639  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:27:18.325846  505287 start.go:296] duration metric: took 206.281238ms for postStartSetup
	I1026 09:27:18.325939  505287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:27:18.326001  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.356161  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.464626  505287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:27:18.472069  505287 fix.go:56] duration metric: took 6.167204169s for fixHost
	I1026 09:27:18.472102  505287 start.go:83] releasing machines lock for "no-preload-491604", held for 6.167270057s
	I1026 09:27:18.472175  505287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:27:18.495188  505287 ssh_runner.go:195] Run: cat /version.json
	I1026 09:27:18.495229  505287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:27:18.495237  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.495297  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.525912  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.533438  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.772626  505287 ssh_runner.go:195] Run: systemctl --version
	I1026 09:27:18.781746  505287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:27:18.863320  505287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:27:18.873116  505287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:27:18.873190  505287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:27:18.891743  505287 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:27:18.891768  505287 start.go:495] detecting cgroup driver to use...
	I1026 09:27:18.891799  505287 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:27:18.891868  505287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:27:18.922197  505287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:27:18.946894  505287 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:27:18.947011  505287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:27:18.966018  505287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:27:18.985087  505287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:27:19.198821  505287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:27:19.371674  505287 docker.go:234] disabling docker service ...
	I1026 09:27:19.371800  505287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:27:19.389129  505287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:27:19.403143  505287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:27:19.556296  505287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:27:19.742206  505287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:27:19.764702  505287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:27:19.782733  505287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:27:19.782847  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.792907  505287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:27:19.793063  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.802133  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.811776  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.821238  505287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:27:19.835257  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.847629  505287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.857425  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.867126  505287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:27:19.876883  505287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:27:19.884290  505287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:27:20.052046  505287 ssh_runner.go:195] Run: sudo systemctl restart crio
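
The run of sed edits above rewrites the CRI-O drop-in in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports, then daemon-reload and restart the service. A condensed sketch of the same sequence, assuming the same drop-in path and root privileges; this consolidation is illustrative, not minikube's exact code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // reconfigureCRIO applies the same in-place edits the log performs one by one,
    // then reloads systemd and restarts crio. Requires root and an installed cri-o.
    // (The default_sysctls edits from the log are omitted for brevity.)
    func reconfigureCRIO() error {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := [][]string{
    		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`, conf},
    		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
    		{"sed", "-i", `/conmon_cgroup = .*/d`, conf},
    		{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "crio"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", s, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := reconfigureCRIO(); err != nil {
    		fmt.Println(err)
    	}
    }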
	I1026 09:27:20.342258  505287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:27:20.342331  505287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:27:20.347395  505287 start.go:563] Will wait 60s for crictl version
	I1026 09:27:20.347453  505287 ssh_runner.go:195] Run: which crictl
	I1026 09:27:20.352046  505287 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:27:20.390395  505287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:27:20.390498  505287 ssh_runner.go:195] Run: crio --version
	I1026 09:27:20.427956  505287 ssh_runner.go:195] Run: crio --version
	I1026 09:27:20.479619  505287 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:27:20.482515  505287 cli_runner.go:164] Run: docker network inspect no-preload-491604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:27:20.498530  505287 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:27:20.504927  505287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:27:20.520134  505287 kubeadm.go:883] updating cluster {Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:27:20.520243  505287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:27:20.520285  505287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:27:20.582084  505287 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:27:20.582109  505287 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:27:20.582121  505287 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:27:20.582211  505287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:27:20.582295  505287 ssh_runner.go:195] Run: crio config
	I1026 09:27:20.661497  505287 cni.go:84] Creating CNI manager for ""
	I1026 09:27:20.661522  505287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:27:20.661538  505287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:27:20.661562  505287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491604 NodeName:no-preload-491604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:27:20.661700  505287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:27:20.661771  505287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:27:20.673520  505287 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:27:20.673597  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:27:20.682955  505287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:27:20.698821  505287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:27:20.714746  505287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 09:27:20.729551  505287 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:27:20.734324  505287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
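
Both pins (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idiom: filter any stale entry out of /etc/hosts with grep -v, append the fresh ip-to-name line, write the result to a temp file, and sudo cp it back, because a plain `sudo ... > /etc/hosts` would perform the redirect as the unprivileged caller. A sketch of assembling that command string; buildHostsPin is a hypothetical helper:

    package main

    import "fmt"

    // buildHostsPin returns a shell command that replaces any existing entry for
    // name in /etc/hosts with "ip<TAB>name". The temp-file + `sudo cp` dance keeps
    // the privileged write on the sudo side of the pipeline.
    func buildHostsPin(ip, name string) string {
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		name, ip, name)
    }

    func main() {
    	fmt.Println(buildHostsPin("192.168.85.2", "control-plane.minikube.internal"))
    }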
	I1026 09:27:20.746112  505287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:27:20.931367  505287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:27:20.957410  505287 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604 for IP: 192.168.85.2
	I1026 09:27:20.957430  505287 certs.go:195] generating shared ca certs ...
	I1026 09:27:20.957446  505287 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:20.957583  505287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:27:20.957641  505287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:27:20.957649  505287 certs.go:257] generating profile certs ...
	I1026 09:27:20.957727  505287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.key
	I1026 09:27:20.957792  505287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key.1aa4df19
	I1026 09:27:20.957827  505287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key
	I1026 09:27:20.957932  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:27:20.957976  505287 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:27:20.957990  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:27:20.958015  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:27:20.958042  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:27:20.958074  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:27:20.958124  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:27:20.958885  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:27:21.009459  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:27:21.048345  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:27:21.083281  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:27:21.132351  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:27:21.167831  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:27:21.200102  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:27:21.255442  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:27:21.308432  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:27:21.341784  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:27:21.398967  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:27:21.426463  505287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:27:21.444253  505287 ssh_runner.go:195] Run: openssl version
	I1026 09:27:21.457548  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:27:21.470220  505287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:27:21.474984  505287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:27:21.475103  505287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:27:21.521627  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:27:21.532697  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:27:21.544290  505287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:27:21.548633  505287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:27:21.548702  505287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:27:21.599415  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:27:21.608103  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:27:21.617117  505287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:27:21.621907  505287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:27:21.622031  505287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:27:21.674938  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
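
All three certificates go through the same trust-store recipe: copy the PEM into /usr/share/ca-certificates, hash it with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at it (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL locates CA certificates by subject hash. A minimal sketch, assuming root privileges and openssl on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // trustCert mirrors the install passes above: given a PEM already copied under
    // /usr/share/ca-certificates, compute its OpenSSL subject hash and point
    // /etc/ssl/certs/<hash>.0 at it so the system trust store resolves it.
    func trustCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // refresh an existing link, matching `ln -fs` in the log
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }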
	I1026 09:27:21.684609  505287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:27:21.689447  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:27:21.733532  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:27:21.779459  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:27:21.871089  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:27:21.915428  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:27:22.017028  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 09:27:22.096794  505287 kubeadm.go:400] StartCluster: {Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:27:22.096933  505287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:27:22.097043  505287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:27:22.192475  505287 cri.go:89] found id: "4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e"
	I1026 09:27:22.192556  505287 cri.go:89] found id: "69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b"
	I1026 09:27:22.192575  505287 cri.go:89] found id: "4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81"
	I1026 09:27:22.192595  505287 cri.go:89] found id: ""
	I1026 09:27:22.192678  505287 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:27:22.244998  505287 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:27:22Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:27:22.245162  505287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:27:22.254644  505287 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:27:22.254802  505287 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:27:22.254905  505287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:27:22.269849  505287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:27:22.270903  505287 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-491604" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:27:22.271558  505287 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-491604" cluster setting kubeconfig missing "no-preload-491604" context setting]
	I1026 09:27:22.272655  505287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
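
The repair above is triggered by looking the profile up in the shared kubeconfig: both the cluster and the context entry were missing, so both get re-added under a write lock. A sketch of that kind of lookup using client-go's clientcmd package; the hasContext helper is illustrative, minikube's own verification lives in kubeconfig.go:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    // hasContext reports whether a kubeconfig file already carries the named
    // cluster and context, the check behind the "needs updating" line above.
    func hasContext(path, name string) (bool, error) {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return false, err
    	}
    	_, clusterOK := cfg.Clusters[name]
    	_, ctxOK := cfg.Contexts[name]
    	return clusterOK && ctxOK, nil
    }

    func main() {
    	ok, err := hasContext("/home/jenkins/.kube/config", "no-preload-491604")
    	fmt.Println(ok, err)
    }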
	I1026 09:27:22.274687  505287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:27:22.287302  505287 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 09:27:22.287334  505287 kubeadm.go:601] duration metric: took 32.502923ms to restartPrimaryControlPlane
	I1026 09:27:22.287342  505287 kubeadm.go:402] duration metric: took 190.560082ms to StartCluster
	I1026 09:27:22.287356  505287 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:22.287424  505287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:27:22.292542  505287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:22.292933  505287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:27:22.293373  505287 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:22.293470  505287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:27:22.293657  505287 addons.go:69] Setting storage-provisioner=true in profile "no-preload-491604"
	I1026 09:27:22.293689  505287 addons.go:238] Setting addon storage-provisioner=true in "no-preload-491604"
	W1026 09:27:22.293723  505287 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:27:22.293780  505287 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:27:22.293847  505287 addons.go:69] Setting dashboard=true in profile "no-preload-491604"
	I1026 09:27:22.299790  505287 addons.go:238] Setting addon dashboard=true in "no-preload-491604"
	W1026 09:27:22.299820  505287 addons.go:247] addon dashboard should already be in state true
	I1026 09:27:22.299882  505287 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:27:22.300250  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.300423  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.293967  505287 addons.go:69] Setting default-storageclass=true in profile "no-preload-491604"
	I1026 09:27:22.301681  505287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-491604"
	I1026 09:27:22.297195  505287 out.go:179] * Verifying Kubernetes components...
	W1026 09:27:18.959244  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:20.963467  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:22.302180  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.307579  505287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:27:22.364151  505287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:27:22.364277  505287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:27:22.367944  505287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:27:22.367968  505287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:27:22.368035  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:22.373837  505287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:27:22.375582  505287 addons.go:238] Setting addon default-storageclass=true in "no-preload-491604"
	W1026 09:27:22.375600  505287 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:27:22.375674  505287 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:27:22.376222  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.378986  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:27:22.379011  505287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:27:22.379077  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:22.423991  505287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:27:22.424012  505287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:27:22.424076  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:22.427608  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:22.446051  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:22.467180  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:22.702306  505287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:27:22.703147  505287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:27:22.725152  505287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:27:22.743811  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:27:22.743882  505287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:27:22.774138  505287 node_ready.go:35] waiting up to 6m0s for node "no-preload-491604" to be "Ready" ...
	I1026 09:27:22.814654  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:27:22.814680  505287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:27:22.930573  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:27:22.930599  505287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:27:23.003333  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:27:23.003363  505287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:27:23.022660  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:27:23.022686  505287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:27:23.050537  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:27:23.050608  505287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:27:23.072596  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:27:23.072623  505287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:27:23.096583  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:27:23.096608  505287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:27:23.110764  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:27:23.110796  505287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:27:23.127841  505287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
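	
	This apply step hands all ten dashboard manifests to the pinned kubectl binary in a single invocation, with KUBECONFIG pointing at the in-VM kubeconfig. A hedged Go sketch of building that command (binary path and manifest locations taken from the log; error handling simplified):
	
	package main
	
	import (
		"os"
		"os/exec"
	)
	
	// applyManifests runs the same shape of command as the log line above:
	// KUBECONFIG=... kubectl apply -f <m1> -f <m2> ...
	func applyManifests(manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
	
	func main() {
		if err := applyManifests([]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the other eight manifests
		}); err != nil {
			os.Exit(1)
		}
	}
	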
	W1026 09:27:23.455947  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:25.456482  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:27.300383  505287 node_ready.go:49] node "no-preload-491604" is "Ready"
	I1026 09:27:27.300414  505287 node_ready.go:38] duration metric: took 4.526183715s for node "no-preload-491604" to be "Ready" ...
	I1026 09:27:27.300430  505287 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:27:27.300496  505287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:27:27.692909  505287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.989513004s)
	I1026 09:27:28.748603  505287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.023367802s)
	I1026 09:27:29.098069  505287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.970182183s)
	I1026 09:27:29.098284  505287 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.797769532s)
	I1026 09:27:29.098308  505287 api_server.go:72] duration metric: took 6.805309707s to wait for apiserver process to appear ...
	I1026 09:27:29.098315  505287 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:27:29.098333  505287 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:27:29.101313  505287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-491604 addons enable metrics-server
	
	I1026 09:27:29.104251  505287 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1026 09:27:29.107178  505287 addons.go:514] duration metric: took 6.813676535s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1026 09:27:29.115850  505287 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 09:27:29.115885  505287 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500: [response body identical to the healthz output above]
	I1026 09:27:29.598461  505287 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:27:29.609011  505287 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 09:27:29.611081  505287 api_server.go:141] control plane version: v1.34.1
	I1026 09:27:29.611147  505287 api_server.go:131] duration metric: took 512.825828ms to wait for apiserver health ...
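	
	The healthz exchange above has the usual shape of this wait: a 500 whose only failing check is [-]poststarthook/rbac/bootstrap-roles, then a 200 about half a second later. A minimal Go sketch of such a poll loop, assuming a self-signed apiserver certificate (hence the skipped TLS verification); this illustrates the pattern rather than reproducing minikube's code:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthz polls url until it returns 200, treating any other status
	// (such as the 500 above) as "not ready yet".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms gap between checks above
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}
	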
	I1026 09:27:29.611183  505287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:27:29.614867  505287 system_pods.go:59] 8 kube-system pods found
	I1026 09:27:29.614946  505287 system_pods.go:61] "coredns-66bc5c9577-2rq75" [b400112c-40a5-4ef6-82d5-b4533cb6e4ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:29.614971  505287 system_pods.go:61] "etcd-no-preload-491604" [dfad6de6-c15a-4fc5-b549-b2fee23d4c8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:29.615006  505287 system_pods.go:61] "kindnet-4g8pl" [c83a24cf-3ae8-42a5-9f26-13ff5989e6ee] Running
	I1026 09:27:29.615032  505287 system_pods.go:61] "kube-apiserver-no-preload-491604" [78d09308-6568-4c2b-8264-06e86e844c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:29.615054  505287 system_pods.go:61] "kube-controller-manager-no-preload-491604" [1d362bb3-9059-461c-a691-5a5c8404168b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:29.615075  505287 system_pods.go:61] "kube-proxy-tpv97" [669ea85d-25d5-4e3e-b4b6-1c86141967f3] Running
	I1026 09:27:29.615109  505287 system_pods.go:61] "kube-scheduler-no-preload-491604" [44ecdc58-d295-4e3d-a881-4933cc93233f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:29.615133  505287 system_pods.go:61] "storage-provisioner" [a5a9e6e7-af2c-4731-bedc-f98677818988] Running
	I1026 09:27:29.615155  505287 system_pods.go:74] duration metric: took 3.952544ms to wait for pod list to return data ...
	I1026 09:27:29.615174  505287 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:27:29.658804  505287 default_sa.go:45] found service account: "default"
	I1026 09:27:29.658875  505287 default_sa.go:55] duration metric: took 43.679659ms for default service account to be created ...
	I1026 09:27:29.658899  505287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:27:29.662849  505287 system_pods.go:86] 8 kube-system pods found
	I1026 09:27:29.662936  505287 system_pods.go:89] "coredns-66bc5c9577-2rq75" [b400112c-40a5-4ef6-82d5-b4533cb6e4ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:29.662967  505287 system_pods.go:89] "etcd-no-preload-491604" [dfad6de6-c15a-4fc5-b549-b2fee23d4c8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:29.663003  505287 system_pods.go:89] "kindnet-4g8pl" [c83a24cf-3ae8-42a5-9f26-13ff5989e6ee] Running
	I1026 09:27:29.663035  505287 system_pods.go:89] "kube-apiserver-no-preload-491604" [78d09308-6568-4c2b-8264-06e86e844c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:29.663058  505287 system_pods.go:89] "kube-controller-manager-no-preload-491604" [1d362bb3-9059-461c-a691-5a5c8404168b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:29.663079  505287 system_pods.go:89] "kube-proxy-tpv97" [669ea85d-25d5-4e3e-b4b6-1c86141967f3] Running
	I1026 09:27:29.663115  505287 system_pods.go:89] "kube-scheduler-no-preload-491604" [44ecdc58-d295-4e3d-a881-4933cc93233f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:29.663140  505287 system_pods.go:89] "storage-provisioner" [a5a9e6e7-af2c-4731-bedc-f98677818988] Running
	I1026 09:27:29.663166  505287 system_pods.go:126] duration metric: took 4.246045ms to wait for k8s-apps to be running ...
	I1026 09:27:29.663186  505287 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:27:29.663277  505287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:29.678620  505287 system_svc.go:56] duration metric: took 15.42457ms WaitForService to wait for kubelet
	I1026 09:27:29.678693  505287 kubeadm.go:586] duration metric: took 7.38569256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:27:29.678805  505287 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:27:29.681544  505287 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:27:29.681578  505287 node_conditions.go:123] node cpu capacity is 2
	I1026 09:27:29.681591  505287 node_conditions.go:105] duration metric: took 2.767421ms to run NodePressure ...
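	
	The NodePressure check reads capacity straight off the Node object: 203034800Ki of ephemeral storage and 2 CPUs here. A client-go sketch of that read (an assumed client and kubeconfig path; minikube's own code differs):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a map of resource name to quantity; copy the values
			// out so the pointer-receiver String() method can be called.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral-storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}
	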
	I1026 09:27:29.681622  505287 start.go:241] waiting for startup goroutines ...
	I1026 09:27:29.681636  505287 start.go:246] waiting for cluster config update ...
	I1026 09:27:29.681648  505287 start.go:255] writing updated cluster config ...
	I1026 09:27:29.681956  505287 ssh_runner.go:195] Run: rm -f paused
	I1026 09:27:29.686663  505287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:27:29.690099  505287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rq75" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:27:31.695875  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:27.956352  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:30.454839  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:33.696999  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:36.195381  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:32.455221  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:34.956700  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:37.470285  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:38.957854  502650 pod_ready.go:94] pod "coredns-66bc5c9577-r7mm4" is "Ready"
	I1026 09:27:38.957889  502650 pod_ready.go:86] duration metric: took 31.508199172s for pod "coredns-66bc5c9577-r7mm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.964485  502650 pod_ready.go:83] waiting for pod "etcd-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.971030  502650 pod_ready.go:94] pod "etcd-embed-certs-204381" is "Ready"
	I1026 09:27:38.971081  502650 pod_ready.go:86] duration metric: took 6.55851ms for pod "etcd-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.976031  502650 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.985786  502650 pod_ready.go:94] pod "kube-apiserver-embed-certs-204381" is "Ready"
	I1026 09:27:38.985862  502650 pod_ready.go:86] duration metric: took 9.745432ms for pod "kube-apiserver-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.990047  502650 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.154375  502650 pod_ready.go:94] pod "kube-controller-manager-embed-certs-204381" is "Ready"
	I1026 09:27:39.154453  502650 pod_ready.go:86] duration metric: took 164.338426ms for pod "kube-controller-manager-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.355079  502650 pod_ready.go:83] waiting for pod "kube-proxy-75p8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.754463  502650 pod_ready.go:94] pod "kube-proxy-75p8k" is "Ready"
	I1026 09:27:39.754539  502650 pod_ready.go:86] duration metric: took 399.389586ms for pod "kube-proxy-75p8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.954894  502650 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:40.354304  502650 pod_ready.go:94] pod "kube-scheduler-embed-certs-204381" is "Ready"
	I1026 09:27:40.354332  502650 pod_ready.go:86] duration metric: took 399.369549ms for pod "kube-scheduler-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:40.354350  502650 pod_ready.go:40] duration metric: took 32.908951965s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:27:40.442348  502650 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:27:40.446614  502650 out.go:179] * Done! kubectl is now configured to use "embed-certs-204381" cluster and "default" namespace by default
	W1026 09:27:38.197452  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:40.696363  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:42.696615  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:45.197328  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:47.695552  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:49.696166  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:51.696324  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
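	
	The run of pod_ready warnings above belongs to the final "extra waiting" phase: for each control-plane label listed at 09:27:29.686663, poll the matching kube-system pods until each reports the Ready condition or disappears. A client-go sketch of that loop (label list and the 4m0s budget come from the log; the polling shape is an assumption):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
		deadline := time.Now().Add(4 * time.Minute) // "extra waiting up to 4m0s"
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: sel})
				ready := err == nil
				if ready {
					for i := range pods.Items {
						if !podReady(&pods.Items[i]) {
							ready = false // an empty list counts as "gone", hence ready
							break
						}
					}
				}
				if ready {
					break
				}
				if time.Now().After(deadline) {
					panic(fmt.Sprintf("pods with %q never became Ready", sel))
				}
				time.Sleep(2 * time.Second)
			}
		}
	}
	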
	
	
	==> CRI-O <==
	Oct 26 09:27:32 embed-certs-204381 crio[650]: time="2025-10-26T09:27:32.312700533Z" level=info msg="Removed container 2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn/dashboard-metrics-scraper" id=d7640327-853b-440b-83cb-640a0e8f274a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:27:36 embed-certs-204381 conmon[1136]: conmon 1f45d5982e892d775a90 <ninfo>: container 1138 exited with status 1
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.304896795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d74d5f8f-446d-43e9-89ec-f11248e6bf5a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.30587892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a25d8c95-2854-4e5c-9bab-9514fdb486c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.308085776Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6311c385-75b9-4a2d-9b90-e52505a5ceec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.30822117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.320060505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.320280594Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/377504a3a69f3d6e79fcc55c327a69632e23e6c00aa5a89264aa6859b634869a/merged/etc/passwd: no such file or directory"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.32030484Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/377504a3a69f3d6e79fcc55c327a69632e23e6c00aa5a89264aa6859b634869a/merged/etc/group: no such file or directory"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.320631523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.37630809Z" level=info msg="Created container 9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4: kube-system/storage-provisioner/storage-provisioner" id=6311c385-75b9-4a2d-9b90-e52505a5ceec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.377391731Z" level=info msg="Starting container: 9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4" id=658be225-80a5-4c40-9890-d8d233446c67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.381398553Z" level=info msg="Started container" PID=1643 containerID=9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4 description=kube-system/storage-provisioner/storage-provisioner id=658be225-80a5-4c40-9890-d8d233446c67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f0d33a66a15f3820ea454918a616e812c166caba4333ed6c7ae5ea1184b16ec
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.000243478Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.007755428Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.007791826Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.007816311Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.012746607Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.012784441Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.01280772Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.01650239Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.016534644Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.016561681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.019705878Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.01974747Z" level=info msg="Updated default CNI network name to kindnet"
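	
	The CREATE/WRITE/RENAME events above are CRI-O's CNI monitor reacting to kindnet rewriting its conflist through a .temp file plus rename; after each event the directory is rescanned and the default network re-selected. A minimal sketch of a watcher with that shape, using github.com/fsnotify/fsnotify (an assumption for illustration; CRI-O's actual watcher lives in its CNI plugin code):
	
	package main
	
	import (
		"log"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// One log line per event, as above; a real implementation would
				// now rescan the directory and pick the first valid conflist.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Printf("watch error: %v", err)
			}
		}
	}
	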
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9c9ac3d30f136       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   2f0d33a66a15f       storage-provisioner                          kube-system
	0f031cb298d46       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   36aa05901fbbf       dashboard-metrics-scraper-6ffb444bf9-9krqn   kubernetes-dashboard
	ac441bded3b54       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   1e8b33e108fcd       kubernetes-dashboard-855c9754f9-5ff88        kubernetes-dashboard
	e7f9e9902aa92       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   d8e1129e7c41f       coredns-66bc5c9577-r7mm4                     kube-system
	65f3891acebf1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   fd73e3b158d38       busybox                                      default
	1f45d5982e892       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   2f0d33a66a15f       storage-provisioner                          kube-system
	c589218ba65cf       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   44959a7a119b2       kube-proxy-75p8k                             kube-system
	7ab5b4e25d7a5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   8ba3ada78c598       kindnet-dcxxb                                kube-system
	fc645c6e07eb5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           53 seconds ago      Running             etcd                        1                   9143ed426f484       etcd-embed-certs-204381                      kube-system
	d4d7f74617d8d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           53 seconds ago      Running             kube-scheduler              1                   c5f999d51131b       kube-scheduler-embed-certs-204381            kube-system
	c4cef10c093e0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           53 seconds ago      Running             kube-apiserver              1                   1d16872b9d991       kube-apiserver-embed-certs-204381            kube-system
	c9cbd9d3e4cfa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           53 seconds ago      Running             kube-controller-manager     1                   408ea9e6fa40d       kube-controller-manager-embed-certs-204381   kube-system
	
	
	==> coredns [e7f9e9902aa925c7efff9d04c9c478d1ff7cb3c07814d288d0998109c3d5d770] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59417 - 27313 "HINFO IN 7150248882293645846.3583209349812123199. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048431877s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-204381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-204381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=embed-certs-204381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-204381
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:27:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:26:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-204381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4c29094e-f18a-4ac6-86a6-71f16f27aacd
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-r7mm4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m12s
	  kube-system                 etcd-embed-certs-204381                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m17s
	  kube-system                 kindnet-dcxxb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m12s
	  kube-system                 kube-apiserver-embed-certs-204381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-embed-certs-204381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-75p8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-embed-certs-204381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9krqn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ff88         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m10s                  kube-proxy       
	  Normal   Starting                 47s                    kube-proxy       
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x8 over 2m28s)  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m17s                  kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s                  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m17s                  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m13s                  node-controller  Node embed-certs-204381 event: Registered Node embed-certs-204381 in Controller
	  Normal   NodeReady                91s                    kubelet          Node embed-certs-204381 status is now: NodeReady
	  Normal   Starting                 55s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 55s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  54s (x8 over 54s)      kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s (x8 over 54s)      kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x8 over 54s)      kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                    node-controller  Node embed-certs-204381 event: Registered Node embed-certs-204381 in Controller
	
	
	==> dmesg <==
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fc645c6e07eb52dcf2d2a8c865d46ef41d8fb8a4a5bf76c369270785a3bb0d6e] <==
	{"level":"warn","ts":"2025-10-26T09:27:04.354081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.389118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.436821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.441986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.465136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.477621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.495785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.519743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.531282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.548600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.571366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.590939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.625329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.651554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.672000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.683680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.705667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.718986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.735021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.750325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.767197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.806984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.825049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.843269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.926796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46972","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:27:55 up  3:10,  0 user,  load average: 2.74, 3.48, 3.00
	Linux embed-certs-204381 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ab5b4e25d7a54fac31b2dca5a6e398e10f0bbd81c9e4e4407ddd084251219b7] <==
	I1026 09:27:06.716904       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:27:06.735338       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:27:06.735487       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:27:06.735501       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:27:06.735530       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:27:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:27:07.000254       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:27:07.004475       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:27:07.004613       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:27:07.005749       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:27:37.000277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:27:37.005914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:27:37.006051       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:27:37.006149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 09:27:38.205768       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:27:38.205877       1 metrics.go:72] Registering metrics
	I1026 09:27:38.205973       1 controller.go:711] "Syncing nftables rules"
	I1026 09:27:46.999887       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:27:46.999974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c4cef10c093e047656711f6ddd43f45e451b4234b38559cf8799fd096a53eda3] <==
	I1026 09:27:06.012535       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 09:27:06.026865       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 09:27:06.028329       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:27:06.028778       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:27:06.029008       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:27:06.029058       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:27:06.029368       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:27:06.029467       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 09:27:06.030019       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:27:06.056143       1 aggregator.go:171] initial CRD sync complete...
	I1026 09:27:06.056174       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 09:27:06.056212       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:27:06.056232       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:27:06.064376       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1026 09:27:06.114496       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:27:06.518807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:27:06.939423       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:27:07.052362       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:27:07.154450       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:27:07.194579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:27:07.368859       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.154.122"}
	I1026 09:27:07.389526       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.219.206"}
	I1026 09:27:09.446042       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:27:09.548305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:27:09.596002       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c9cbd9d3e4cfa1cf00ca6b7ab613ad7c0bbc25320fa33f24966b346c5cfee930] <==
	I1026 09:27:09.107820       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:27:09.107952       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:27:09.108042       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-204381"
	I1026 09:27:09.108127       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 09:27:09.109957       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:27:09.112450       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:27:09.114861       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:27:09.118278       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:27:09.120542       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:27:09.124969       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:27:09.127225       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:27:09.133414       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:27:09.135786       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:27:09.136521       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:09.138820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:27:09.140068       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 09:27:09.140076       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:27:09.140128       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:09.140366       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:27:09.140400       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:27:09.140146       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:27:09.140159       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:27:09.147567       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:27:09.160719       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:27:09.163983       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [c589218ba65cfb5f8058e769abaec08033e797aa266d515739ceea95a26adbb3] <==
	I1026 09:27:07.064408       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:27:07.295618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:27:07.408422       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:27:07.408494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 09:27:07.408560       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:27:07.449101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:27:07.449250       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:27:07.463866       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:27:07.464281       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:27:07.464337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:07.465525       1 config.go:200] "Starting service config controller"
	I1026 09:27:07.465614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:27:07.465657       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:27:07.465684       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:27:07.465730       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:27:07.465762       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:27:07.466421       1 config.go:309] "Starting node config controller"
	I1026 09:27:07.466475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:27:07.466504       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:27:07.566944       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:27:07.567010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:27:07.567035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d4d7f74617d8d427b2faab1b3c5e48bbbae37682e6b48f8e1d3141a76e4a4b45] <==
	I1026 09:27:05.852770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:05.866804       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:27:05.867003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:05.867020       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:05.867035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 09:27:05.911269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:27:05.911356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:27:05.911410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:27:05.911465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:27:05.911512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:27:05.911559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:27:05.911620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:27:05.911668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:27:05.911722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:27:05.911768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:27:05.911815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:27:05.911900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:27:05.911952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:27:05.911991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:27:05.912027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:27:05.912067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:27:05.912117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:27:05.912162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:27:05.912325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 09:27:07.568019       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:27:06 embed-certs-204381 kubelet[774]: W1026 09:27:06.391113     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-fd73e3b158d380f1afdc489e760e602468ea722ef1f405e4b2159a1e02e33228 WatchSource:0}: Error finding container fd73e3b158d380f1afdc489e760e602468ea722ef1f405e4b2159a1e02e33228: Status 404 returned error can't find the container with id fd73e3b158d380f1afdc489e760e602468ea722ef1f405e4b2159a1e02e33228
	Oct 26 09:27:06 embed-certs-204381 kubelet[774]: W1026 09:27:06.480393     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-d8e1129e7c41f6ad67588f475b0fc578e128da3e31d63dd4527c6ada29cc5463 WatchSource:0}: Error finding container d8e1129e7c41f6ad67588f475b0fc578e128da3e31d63dd4527c6ada29cc5463: Status 404 returned error can't find the container with id d8e1129e7c41f6ad67588f475b0fc578e128da3e31d63dd4527c6ada29cc5463
	Oct 26 09:27:08 embed-certs-204381 kubelet[774]: I1026 09:27:08.569872     774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840615     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4f0befad-4bc4-4a9c-9792-d5cffe2c2666-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-9krqn\" (UID: \"4f0befad-4bc4-4a9c-9792-d5cffe2c2666\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840676     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1d408d28-04be-46eb-9ff5-f6ecf8801b89-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-5ff88\" (UID: \"1d408d28-04be-46eb-9ff5-f6ecf8801b89\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ff88"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840698     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mr52\" (UniqueName: \"kubernetes.io/projected/1d408d28-04be-46eb-9ff5-f6ecf8801b89-kube-api-access-6mr52\") pod \"kubernetes-dashboard-855c9754f9-5ff88\" (UID: \"1d408d28-04be-46eb-9ff5-f6ecf8801b89\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ff88"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840723     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbx57\" (UniqueName: \"kubernetes.io/projected/4f0befad-4bc4-4a9c-9792-d5cffe2c2666-kube-api-access-hbx57\") pod \"dashboard-metrics-scraper-6ffb444bf9-9krqn\" (UID: \"4f0befad-4bc4-4a9c-9792-d5cffe2c2666\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn"
	Oct 26 09:27:10 embed-certs-204381 kubelet[774]: W1026 09:27:10.094307     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-36aa05901fbbf05e14e16b7b88daabe5b4c3f27923e0b14147104b21f453ffc2 WatchSource:0}: Error finding container 36aa05901fbbf05e14e16b7b88daabe5b4c3f27923e0b14147104b21f453ffc2: Status 404 returned error can't find the container with id 36aa05901fbbf05e14e16b7b88daabe5b4c3f27923e0b14147104b21f453ffc2
	Oct 26 09:27:16 embed-certs-204381 kubelet[774]: I1026 09:27:16.234405     774 scope.go:117] "RemoveContainer" containerID="55c56d5884832176441c470ef85ae10a68cb4b165fa98600963aa42733a787f7"
	Oct 26 09:27:17 embed-certs-204381 kubelet[774]: I1026 09:27:17.247773     774 scope.go:117] "RemoveContainer" containerID="55c56d5884832176441c470ef85ae10a68cb4b165fa98600963aa42733a787f7"
	Oct 26 09:27:17 embed-certs-204381 kubelet[774]: I1026 09:27:17.248857     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:17 embed-certs-204381 kubelet[774]: E1026 09:27:17.249117     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:20 embed-certs-204381 kubelet[774]: I1026 09:27:20.214545     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:20 embed-certs-204381 kubelet[774]: E1026 09:27:20.214771     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:22 embed-certs-204381 kubelet[774]: I1026 09:27:22.318244     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ff88" podStartSLOduration=1.884338778 podStartE2EDuration="13.318227455s" podCreationTimestamp="2025-10-26 09:27:09 +0000 UTC" firstStartedPulling="2025-10-26 09:27:10.136233756 +0000 UTC m=+9.385309098" lastFinishedPulling="2025-10-26 09:27:21.570122433 +0000 UTC m=+20.819197775" observedRunningTime="2025-10-26 09:27:22.317812919 +0000 UTC m=+21.566888269" watchObservedRunningTime="2025-10-26 09:27:22.318227455 +0000 UTC m=+21.567302805"
	Oct 26 09:27:32 embed-certs-204381 kubelet[774]: I1026 09:27:32.114033     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:32 embed-certs-204381 kubelet[774]: I1026 09:27:32.289510     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:33 embed-certs-204381 kubelet[774]: I1026 09:27:33.293070     774 scope.go:117] "RemoveContainer" containerID="0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	Oct 26 09:27:33 embed-certs-204381 kubelet[774]: E1026 09:27:33.293229     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:37 embed-certs-204381 kubelet[774]: I1026 09:27:37.304491     774 scope.go:117] "RemoveContainer" containerID="1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247"
	Oct 26 09:27:40 embed-certs-204381 kubelet[774]: I1026 09:27:40.214135     774 scope.go:117] "RemoveContainer" containerID="0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	Oct 26 09:27:40 embed-certs-204381 kubelet[774]: E1026 09:27:40.214891     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:52 embed-certs-204381 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:27:52 embed-certs-204381 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:27:52 embed-certs-204381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ac441bded3b54ada4b84416b407f28fd84714df732c42deea0ac4709a5553635] <==
	2025/10/26 09:27:21 Using namespace: kubernetes-dashboard
	2025/10/26 09:27:21 Using in-cluster config to connect to apiserver
	2025/10/26 09:27:21 Using secret token for csrf signing
	2025/10/26 09:27:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:27:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:27:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 09:27:21 Generating JWE encryption key
	2025/10/26 09:27:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:27:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:27:22 Initializing JWE encryption key from synchronized object
	2025/10/26 09:27:22 Creating in-cluster Sidecar client
	2025/10/26 09:27:22 Serving insecurely on HTTP port: 9090
	2025/10/26 09:27:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:21 Starting overwatch
	
	
	==> storage-provisioner [1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247] <==
	I1026 09:27:06.712377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:27:36.717097       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4] <==
	I1026 09:27:37.398343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:27:37.423887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:27:37.426286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:27:37.432447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:40.887162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:45.148970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:48.747049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:51.800192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:54.822892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:54.828524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:27:54.828687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:27:54.831097       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-204381_4d52abfd-87fd-4ba1-9871-21a43032fa08!
	I1026 09:27:54.840164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff3b783c-a30e-49f8-b18c-92455e17892c", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-204381_4d52abfd-87fd-4ba1-9871-21a43032fa08 became leader
	W1026 09:27:54.841786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:54.851672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:27:54.932582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-204381_4d52abfd-87fd-4ba1-9871-21a43032fa08!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204381 -n embed-certs-204381
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204381 -n embed-certs-204381: exit status 2 (372.271017ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
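The --format={{.APIServer}} argument above is a Go text/template rendered against minikube's status struct, which is why the command can print a single field such as "Running". A minimal sketch of the same mechanism, with a hypothetical Status type standing in for minikube's internal one:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status is a hypothetical stand-in for minikube's internal status
	// struct; only the template mechanics below mirror the real CLI flag.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}
	
	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
	
		// Parse and execute the flag value exactly as a Go template,
		// as in `minikube status --format={{.APIServer}}`.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}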
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-204381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
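The field selector in the kubectl step above (status.phase!=Running) is how the post-mortem looks for pods stuck outside the Running phase. The same query can be issued through client-go; a sketch, assuming a kubeconfig at the default path rather than the test's embed-certs-204381 context:

	package main
	
	import (
		"context"
		"fmt"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		// Build a client from the default kubeconfig; the harness would
		// target the profile's context instead.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// Mirror `--field-selector=status.phase!=Running` across all namespaces.
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name)
		}
	}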
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-204381
helpers_test.go:243: (dbg) docker inspect embed-certs-204381:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab",
	        "Created": "2025-10-26T09:25:07.035838779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:26:52.60501554Z",
	            "FinishedAt": "2025-10-26T09:26:51.730765633Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/hosts",
	        "LogPath": "/var/lib/docker/containers/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab-json.log",
	        "Name": "/embed-certs-204381",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-204381:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-204381",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab",
	                "LowerDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39f02fc23eec16a4a9133efb81655c8ddaef79801f2d22f17ad6df88e7f73da6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-204381",
	                "Source": "/var/lib/docker/volumes/embed-certs-204381/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-204381",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-204381",
	                "name.minikube.sigs.k8s.io": "embed-certs-204381",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7123c2cf08f742ea30613fd80c9cbc0dad21352d36b2bb7b63ee3645a0f36ac1",
	            "SandboxKey": "/var/run/docker/netns/7123c2cf08f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-204381": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:1c:84:ea:88:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33c235a08e203b1c326fabab7473b4ca038ba835f19a85fcec21303edd44d5d4",
	                    "EndpointID": "cf750c0573da88cd0bba555a484b3cb6149345724d492672074c17a4acd43486",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-204381",
	                        "fbf6b6fb12ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
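In the inspect output above, HostConfig.PortBindings requests ephemeral host ports (empty "HostPort"), and the bindings the daemon actually allocated appear under NetworkSettings.Ports, e.g. 8443/tcp published at 127.0.0.1:33453. A sketch of reading that resolved apiserver binding with the Docker Go SDK (container name taken from this report; error handling kept minimal):

	package main
	
	import (
		"context"
		"fmt"
	
		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)
	
	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()
	
		// Same data `docker inspect embed-certs-204381` returns, as a struct.
		info, err := cli.ContainerInspect(context.Background(), "embed-certs-204381")
		if err != nil {
			panic(err)
		}
	
		// HostConfig.PortBindings asked for an ephemeral port; the daemon's
		// chosen binding lives in NetworkSettings.Ports.
		for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
		}
	}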
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381: exit status 2 (351.883189ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-204381 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-204381 logs -n 25: (1.330377379s)
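The "(dbg) Done ... (1.330377379s)" annotation above is the harness timing the command it just ran. A minimal sketch of that pattern (illustrative only, not minikube's actual helper):

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		start := time.Now()
	
		// Run the same post-mortem command and capture combined output.
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "embed-certs-204381", "logs", "-n", "25").CombinedOutput()
	
		elapsed := time.Since(start)
		if err != nil {
			fmt.Printf("(dbg) Non-zero exit: %v (%s)\n", err, elapsed)
			return
		}
		fmt.Printf("(dbg) Done: captured %d bytes of logs in %s\n", len(out), elapsed)
	}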
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-289159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-289159 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ start   │ -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ image   │ old-k8s-version-167519 image list --format=json                                                                                                                          │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:24 UTC │
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                    │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                          │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                          │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                          │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                             │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                              │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                              │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                             │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:27:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:27:11.981589  505287 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:27:11.981791  505287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:27:11.981814  505287 out.go:374] Setting ErrFile to fd 2...
	I1026 09:27:11.981841  505287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:27:11.982119  505287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:27:11.982539  505287 out.go:368] Setting JSON to false
	I1026 09:27:11.983584  505287 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11382,"bootTime":1761459450,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:27:11.983756  505287 start.go:141] virtualization:  
	I1026 09:27:11.989842  505287 out.go:179] * [no-preload-491604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:27:11.993308  505287 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:27:11.993382  505287 notify.go:220] Checking for updates...
	I1026 09:27:12.000313  505287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:27:12.004221  505287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:27:12.008072  505287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:27:12.011333  505287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:27:12.014526  505287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:27:12.018246  505287 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:12.018890  505287 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:27:12.053228  505287 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:27:12.053360  505287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:27:12.144737  505287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:27:12.133709998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:27:12.144866  505287 docker.go:318] overlay module found
	I1026 09:27:12.148345  505287 out.go:179] * Using the docker driver based on existing profile
	I1026 09:27:12.151776  505287 start.go:305] selected driver: docker
	I1026 09:27:12.151804  505287 start.go:925] validating driver "docker" against &{Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:27:12.151947  505287 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:27:12.152658  505287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:27:12.265196  505287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:27:12.252598908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:27:12.265555  505287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:27:12.265592  505287 cni.go:84] Creating CNI manager for ""
	I1026 09:27:12.265650  505287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:27:12.265695  505287 start.go:349] cluster config:
	{Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:27:12.270500  505287 out.go:179] * Starting "no-preload-491604" primary control-plane node in "no-preload-491604" cluster
	I1026 09:27:12.274069  505287 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:27:12.277572  505287 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:27:07.406578  502650 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:27:07.407708  502650 api_server.go:141] control plane version: v1.34.1
	I1026 09:27:07.407741  502650 api_server.go:131] duration metric: took 9.933947ms to wait for apiserver health ...
	I1026 09:27:07.407751  502650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:27:07.407939  502650 addons.go:514] duration metric: took 5.575307832s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 09:27:07.412123  502650 system_pods.go:59] 8 kube-system pods found
	I1026 09:27:07.412170  502650 system_pods.go:61] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:07.412198  502650 system_pods.go:61] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:07.412212  502650 system_pods.go:61] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:27:07.412239  502650 system_pods.go:61] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:07.412254  502650 system_pods.go:61] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:07.412270  502650 system_pods.go:61] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:27:07.412286  502650 system_pods.go:61] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:07.412292  502650 system_pods.go:61] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Running
	I1026 09:27:07.412317  502650 system_pods.go:74] duration metric: took 4.557762ms to wait for pod list to return data ...
	I1026 09:27:07.412332  502650 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:27:07.415742  502650 default_sa.go:45] found service account: "default"
	I1026 09:27:07.415777  502650 default_sa.go:55] duration metric: took 3.430954ms for default service account to be created ...
	I1026 09:27:07.415787  502650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:27:07.418917  502650 system_pods.go:86] 8 kube-system pods found
	I1026 09:27:07.418955  502650 system_pods.go:89] "coredns-66bc5c9577-r7mm4" [4c074f51-2576-4be7-8643-1ee880c3182d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:07.418965  502650 system_pods.go:89] "etcd-embed-certs-204381" [5c1b1286-6d91-4361-97bc-6598163048d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:07.418971  502650 system_pods.go:89] "kindnet-dcxxb" [5ed4ad7a-2caf-4f87-b112-82c3f27fe3c3] Running
	I1026 09:27:07.418980  502650 system_pods.go:89] "kube-apiserver-embed-certs-204381" [2df530d7-ea45-404f-a86f-d701d04b8379] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:07.418987  502650 system_pods.go:89] "kube-controller-manager-embed-certs-204381" [19ddfb8a-a939-47f0-9322-198bc0344502] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:07.418996  502650 system_pods.go:89] "kube-proxy-75p8k" [65c22908-92e3-48d1-a15d-8c695de4420a] Running
	I1026 09:27:07.419005  502650 system_pods.go:89] "kube-scheduler-embed-certs-204381" [87fb728e-860d-4eb1-baee-0f75cf513de3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:07.419014  502650 system_pods.go:89] "storage-provisioner" [0ed81b53-0c23-47f0-9e38-122cd2bf5f0a] Running
	I1026 09:27:07.419021  502650 system_pods.go:126] duration metric: took 3.229293ms to wait for k8s-apps to be running ...
	I1026 09:27:07.419034  502650 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:27:07.419088  502650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:07.435852  502650 system_svc.go:56] duration metric: took 16.806382ms WaitForService to wait for kubelet
	I1026 09:27:07.435883  502650 kubeadm.go:586] duration metric: took 5.603601124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:27:07.435901  502650 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:27:07.440622  502650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:27:07.440656  502650 node_conditions.go:123] node cpu capacity is 2
	I1026 09:27:07.440670  502650 node_conditions.go:105] duration metric: took 4.729867ms to run NodePressure ...
	I1026 09:27:07.440683  502650 start.go:241] waiting for startup goroutines ...
	I1026 09:27:07.440691  502650 start.go:246] waiting for cluster config update ...
	I1026 09:27:07.440702  502650 start.go:255] writing updated cluster config ...
	I1026 09:27:07.440994  502650 ssh_runner.go:195] Run: rm -f paused
	I1026 09:27:07.445367  502650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:27:07.449664  502650 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7mm4" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:27:09.459980  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:11.957007  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
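The pod_ready.go lines above come from a poll loop: each kube-system pod is re-checked every couple of seconds until it reports Ready or the 4m0s budget runs out, with the W-level lines marking not-yet-ready polls. A minimal Go sketch of that pattern, assuming a hypothetical isPodReady helper in place of minikube's real client-go query:

```go
// Minimal sketch of a poll-until-Ready loop; not minikube's actual code.
package main

import (
	"context"
	"fmt"
	"time"
)

// isPodReady stands in for a real API-server query (hypothetical helper).
func isPodReady(ctx context.Context, name string) (bool, error) { return false, nil }

// waitPodReady polls every 2s until the pod is Ready or the 4m0s budget
// (the "extra waiting up to 4m0s" line above) expires; transient errors
// are simply retried, as in the W-level log lines.
func waitPodReady(name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q never became Ready: %w", name, ctx.Err())
		case <-tick.C:
			if ready, err := isPodReady(ctx, name); err == nil && ready {
				return nil
			}
		}
	}
}

func main() { fmt.Println(waitPodReady("coredns-66bc5c9577-r7mm4")) }
```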
	I1026 09:27:12.280837  505287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:27:12.280996  505287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json ...
	I1026 09:27:12.281304  505287 cache.go:107] acquiring lock: {Name:mkdad500968e7139280738b23aa2f2a019253f5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281382  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 09:27:12.281390  505287 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.06µs
	I1026 09:27:12.281398  505287 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 09:27:12.281410  505287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:27:12.281606  505287 cache.go:107] acquiring lock: {Name:mk599bfcacc3fab2a4670e80f471bbbcaed32bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281658  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 09:27:12.281666  505287 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 67.168µs
	I1026 09:27:12.281673  505287 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 09:27:12.281684  505287 cache.go:107] acquiring lock: {Name:mk14bfa53cd66a6ca87d606642a3cbb2da8dfbc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281713  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 09:27:12.281718  505287 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.759µs
	I1026 09:27:12.281724  505287 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 09:27:12.281734  505287 cache.go:107] acquiring lock: {Name:mk7d0c8b8f0317e07f3637091202b09c4c80488b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281761  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 09:27:12.281766  505287 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.624µs
	I1026 09:27:12.281772  505287 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 09:27:12.281783  505287 cache.go:107] acquiring lock: {Name:mk38cdae88a1b6a128486f22f7bf9cbf423409f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281808  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 09:27:12.281813  505287 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.966µs
	I1026 09:27:12.281819  505287 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 09:27:12.281830  505287 cache.go:107] acquiring lock: {Name:mk1911c569c908e58b6e7e7f80fbc6513309fcca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281855  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 09:27:12.281861  505287 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.541µs
	I1026 09:27:12.281866  505287 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 09:27:12.281875  505287 cache.go:107] acquiring lock: {Name:mkec65762826ae78f9cb76c49217646d15db3a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281900  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 09:27:12.281905  505287 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.024µs
	I1026 09:27:12.281923  505287 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 09:27:12.281934  505287 cache.go:107] acquiring lock: {Name:mk439f753472c6d4dacbd31dbea66f1a2f133a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.281961  505287 cache.go:115] /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 09:27:12.281966  505287 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.453µs
	I1026 09:27:12.281972  505287 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 09:27:12.281978  505287 cache.go:87] Successfully saved all images to host disk.
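Each cache.go triple above (acquiring lock / exists / succeeded) is a lock-then-stat check: if the image tarball is already on disk, the save is skipped, which is why every image completes in microseconds. A sketch of that check under the same cache layout as the log's paths; the pull-and-save branch is elided:

```go
// Sketch of the exists-then-skip cache check; the real save is omitted.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func ensureCached(cacheDir, image string) (hit bool, err error) {
	// "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(dst); err == nil {
		return true, nil // the "exists ... skipping" branch in the log
	}
	return false, nil // a real implementation would pull and save here
}

func main() {
	hit, _ := ensureCached(os.ExpandEnv("$HOME/.minikube/cache/images/arm64"),
		"registry.k8s.io/pause:3.10.1")
	fmt.Println("cache hit:", hit)
}
```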
	I1026 09:27:12.304680  505287 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:27:12.304709  505287 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:27:12.304725  505287 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:27:12.304761  505287 start.go:360] acquireMachinesLock for no-preload-491604: {Name:mkc6d58300c0451128c3270d72a7123ff4bec2e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:27:12.304819  505287 start.go:364] duration metric: took 37.375µs to acquireMachinesLock for "no-preload-491604"
	I1026 09:27:12.304845  505287 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:27:12.304858  505287 fix.go:54] fixHost starting: 
	I1026 09:27:12.305122  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:12.324470  505287 fix.go:112] recreateIfNeeded on no-preload-491604: state=Stopped err=<nil>
	W1026 09:27:12.324500  505287 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:27:12.327778  505287 out.go:252] * Restarting existing docker container for "no-preload-491604" ...
	I1026 09:27:12.327862  505287 cli_runner.go:164] Run: docker start no-preload-491604
	I1026 09:27:12.680145  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:12.726526  505287 kic.go:430] container "no-preload-491604" state is running.
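The fix.go flow above inspects the container's state, sees Stopped, and restarts it instead of recreating. A sketch of the same decision using plain os/exec (minikube routes this through its cli_runner; the function name here is illustrative):

```go
// Sketch: inspect the container state and `docker start` it when not running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func startIfStopped(name string) error {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return fmt.Errorf("inspect %s: %w", name, err)
	}
	if state := strings.TrimSpace(string(out)); state != "running" {
		return exec.Command("docker", "start", name).Run()
	}
	return nil
}

func main() { fmt.Println(startIfStopped("no-preload-491604")) }
```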
	I1026 09:27:12.726983  505287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:27:12.757263  505287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/config.json ...
	I1026 09:27:12.757486  505287 machine.go:93] provisionDockerMachine start ...
	I1026 09:27:12.757544  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:12.782421  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:12.782796  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:12.782808  505287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:27:12.783546  505287 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38080->127.0.0.1:33455: read: connection reset by peer
	I1026 09:27:15.946522  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-491604
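The first dial above failed with "connection reset by peer" because sshd inside the just-restarted container was not yet accepting connections; roughly three seconds later the retry succeeded. A sketch of that redial loop using golang.org/x/crypto/ssh, with addr/budget values taken from the log for illustration:

```go
// Sketch of redial-until-sshd-is-up; not minikube's actual dialer.
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, budget time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(budget)
	var lastErr error
	for time.Now().Before(deadline) {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "read: connection reset by peer"
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh never came up: %w", lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{User: "docker", HostKeyCallback: ssh.InsecureIgnoreHostKey()}
	_, err := dialWithRetry("127.0.0.1:33455", cfg, 30*time.Second)
	fmt.Println(err)
}
```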
	
	I1026 09:27:15.946556  505287 ubuntu.go:182] provisioning hostname "no-preload-491604"
	I1026 09:27:15.946617  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:15.969504  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:15.969859  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:15.969890  505287 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-491604 && echo "no-preload-491604" | sudo tee /etc/hostname
	I1026 09:27:16.152999  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-491604
	
	I1026 09:27:16.153078  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:16.176825  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:16.177138  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:16.177159  505287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-491604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-491604/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-491604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:27:16.353124  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:27:16.353158  505287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:27:16.353179  505287 ubuntu.go:190] setting up certificates
	I1026 09:27:16.353190  505287 provision.go:84] configureAuth start
	I1026 09:27:16.353249  505287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:27:16.384420  505287 provision.go:143] copyHostCerts
	I1026 09:27:16.384488  505287 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:27:16.384512  505287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:27:16.384586  505287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:27:16.384699  505287 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:27:16.384710  505287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:27:16.384737  505287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:27:16.384800  505287 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:27:16.384808  505287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:27:16.384832  505287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:27:16.384892  505287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.no-preload-491604 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-491604]
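The provision.go line above generates a server certificate whose SANs cover every address the machine answers on. A sketch of building such a certificate with crypto/x509, using the SAN list and CertExpiration value from the log; it self-signs for brevity, whereas minikube signs against its ca.pem/ca-key.pem:

```go
// Sketch: a server cert carrying the SANs listed in the log; self-signed here.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-491604"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-491604"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
```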
	W1026 09:27:14.458346  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:16.956480  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:17.512825  505287 provision.go:177] copyRemoteCerts
	I1026 09:27:17.512897  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:27:17.512957  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:17.555177  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:17.664670  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:27:17.686635  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:27:17.710074  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:27:17.733478  505287 provision.go:87] duration metric: took 1.380270859s to configureAuth
	I1026 09:27:17.733505  505287 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:27:17.733717  505287 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:17.733839  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:17.753772  505287 main.go:141] libmachine: Using SSH client type: native
	I1026 09:27:17.754098  505287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1026 09:27:17.754119  505287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:27:18.119512  505287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:27:18.119538  505287 machine.go:96] duration metric: took 5.362043072s to provisionDockerMachine
	I1026 09:27:18.119549  505287 start.go:293] postStartSetup for "no-preload-491604" (driver="docker")
	I1026 09:27:18.119562  505287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:27:18.119622  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:27:18.119681  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.143932  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.275532  505287 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:27:18.279342  505287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:27:18.279373  505287 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:27:18.279385  505287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:27:18.279450  505287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:27:18.279530  505287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:27:18.279633  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:27:18.295639  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:27:18.325846  505287 start.go:296] duration metric: took 206.281238ms for postStartSetup
	I1026 09:27:18.325939  505287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:27:18.326001  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.356161  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.464626  505287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:27:18.472069  505287 fix.go:56] duration metric: took 6.167204169s for fixHost
	I1026 09:27:18.472102  505287 start.go:83] releasing machines lock for "no-preload-491604", held for 6.167270057s
	I1026 09:27:18.472175  505287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-491604
	I1026 09:27:18.495188  505287 ssh_runner.go:195] Run: cat /version.json
	I1026 09:27:18.495229  505287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:27:18.495237  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.495297  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:18.525912  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.533438  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:18.772626  505287 ssh_runner.go:195] Run: systemctl --version
	I1026 09:27:18.781746  505287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:27:18.863320  505287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:27:18.873116  505287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:27:18.873190  505287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:27:18.891743  505287 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
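The find/mv step above renames any bridge or podman CNI config in /etc/cni/net.d to *.mk_disabled so the kindnet config takes precedence; here nothing matched. A sketch of the same walk done in-process rather than by shelling out to find(1):

```go
// Sketch: rename bridge/podman CNI configs so they stop being loaded.
package main

import (
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNIs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			p := filepath.Join(dir, name)
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() { _ = disableBridgeCNIs("/etc/cni/net.d") }
```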
	I1026 09:27:18.891768  505287 start.go:495] detecting cgroup driver to use...
	I1026 09:27:18.891799  505287 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:27:18.891868  505287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:27:18.922197  505287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:27:18.946894  505287 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:27:18.947011  505287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:27:18.966018  505287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:27:18.985087  505287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:27:19.198821  505287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:27:19.371674  505287 docker.go:234] disabling docker service ...
	I1026 09:27:19.371800  505287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:27:19.389129  505287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:27:19.403143  505287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:27:19.556296  505287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:27:19.742206  505287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:27:19.764702  505287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:27:19.782733  505287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:27:19.782847  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.792907  505287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:27:19.793063  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.802133  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.811776  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.821238  505287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:27:19.835257  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.847629  505287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.857425  505287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:27:19.867126  505287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:27:19.876883  505287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:27:19.884290  505287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:27:20.052046  505287 ssh_runner.go:195] Run: sudo systemctl restart crio
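The sed pipeline above pins pause_image and cgroup_manager in 02-crio.conf before the daemon-reload and crio restart. A sketch of the two central substitutions done in-process with regexp instead of sed; the conmon_cgroup and default_sysctls edits would follow the same pattern:

```go
// Sketch: pin pause_image and cgroup_manager in crio's drop-in config.
package main

import (
	"os"
	"regexp"
)

func rewriteCrioConf(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(b)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() { _ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf") }
```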
	I1026 09:27:20.342258  505287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:27:20.342331  505287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:27:20.347395  505287 start.go:563] Will wait 60s for crictl version
	I1026 09:27:20.347453  505287 ssh_runner.go:195] Run: which crictl
	I1026 09:27:20.352046  505287 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:27:20.390395  505287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:27:20.390498  505287 ssh_runner.go:195] Run: crio --version
	I1026 09:27:20.427956  505287 ssh_runner.go:195] Run: crio --version
	I1026 09:27:20.479619  505287 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:27:20.482515  505287 cli_runner.go:164] Run: docker network inspect no-preload-491604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:27:20.498530  505287 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:27:20.504927  505287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
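The one-liner above makes the /etc/hosts entry idempotent: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back (staged through /tmp/h.$$ only because the final cp needs sudo). The same logic as a Go sketch:

```go
// Sketch: drop any stale mapping for host, then append the fresh one.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(ip, host string) error {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // grep -v $'\thost...$'
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+host)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() { _ = ensureHostsEntry("192.168.85.1", "host.minikube.internal") }
```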
	I1026 09:27:20.520134  505287 kubeadm.go:883] updating cluster {Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:27:20.520243  505287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:27:20.520285  505287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:27:20.582084  505287 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:27:20.582109  505287 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:27:20.582121  505287 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:27:20.582211  505287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
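The [Unit]/[Service] text above is the kubelet systemd drop-in that the later "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf" line writes to the node; daemon-reload plus start kubelet then activate it. A sketch of that write, with the unit text reproduced from the log:

```go
// Sketch: write the kubelet systemd drop-in rendered above to disk.
package main

import "os"

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-491604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
`

func main() {
	_ = os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		[]byte(dropIn), 0o644)
}
```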
	I1026 09:27:20.582295  505287 ssh_runner.go:195] Run: crio config
	I1026 09:27:20.661497  505287 cni.go:84] Creating CNI manager for ""
	I1026 09:27:20.661522  505287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:27:20.661538  505287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:27:20.661562  505287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-491604 NodeName:no-preload-491604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:27:20.661700  505287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-491604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
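The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing file. A small sanity-check sketch, assuming gopkg.in/yaml.v3, that decodes the stream and prints each document's kind:

```go
// Sketch: decode the multi-document kubeadm.yaml and print each kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			Kind string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return // all documents read
			}
			panic(err)
		}
		fmt.Println(doc.Kind)
	}
}
```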
	
	I1026 09:27:20.661771  505287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:27:20.673520  505287 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:27:20.673597  505287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:27:20.682955  505287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:27:20.698821  505287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:27:20.714746  505287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 09:27:20.729551  505287 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:27:20.734324  505287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:27:20.746112  505287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:27:20.931367  505287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:27:20.957410  505287 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604 for IP: 192.168.85.2
	I1026 09:27:20.957430  505287 certs.go:195] generating shared ca certs ...
	I1026 09:27:20.957446  505287 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:20.957583  505287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:27:20.957641  505287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:27:20.957649  505287 certs.go:257] generating profile certs ...
	I1026 09:27:20.957727  505287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.key
	I1026 09:27:20.957792  505287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key.1aa4df19
	I1026 09:27:20.957827  505287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key
	I1026 09:27:20.957932  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:27:20.957976  505287 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:27:20.957990  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:27:20.958015  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:27:20.958042  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:27:20.958074  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:27:20.958124  505287 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:27:20.958885  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:27:21.009459  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:27:21.048345  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:27:21.083281  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:27:21.132351  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:27:21.167831  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:27:21.200102  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:27:21.255442  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:27:21.308432  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:27:21.341784  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:27:21.398967  505287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:27:21.426463  505287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:27:21.444253  505287 ssh_runner.go:195] Run: openssl version
	I1026 09:27:21.457548  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:27:21.470220  505287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:27:21.474984  505287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:27:21.475103  505287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:27:21.521627  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:27:21.532697  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:27:21.544290  505287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:27:21.548633  505287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:27:21.548702  505287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:27:21.599415  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:27:21.608103  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:27:21.617117  505287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:27:21.621907  505287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:27:21.622031  505287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:27:21.674938  505287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
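The three test-and-link commands above exist because OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0): each installed PEM gets a symlink named after its `openssl x509 -hash` output. A sketch of one such link, shelling out to openssl the same way:

```go
// Sketch: symlink a CA cert under its OpenSSL subject-hash name.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace a stale link
	return os.Symlink(certPath, link)
}

func main() { _ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem") }
```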
	I1026 09:27:21.684609  505287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:27:21.689447  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:27:21.733532  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:27:21.779459  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:27:21.871089  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:27:21.915428  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:27:22.017028  505287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 09:27:22.096794  505287 kubeadm.go:400] StartCluster: {Name:no-preload-491604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-491604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:27:22.096933  505287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:27:22.097043  505287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:27:22.192475  505287 cri.go:89] found id: "4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e"
	I1026 09:27:22.192556  505287 cri.go:89] found id: "69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b"
	I1026 09:27:22.192575  505287 cri.go:89] found id: "4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81"
	I1026 09:27:22.192595  505287 cri.go:89] found id: ""
	I1026 09:27:22.192678  505287 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:27:22.244998  505287 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:27:22Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:27:22.245162  505287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:27:22.254644  505287 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:27:22.254802  505287 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:27:22.254905  505287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:27:22.269849  505287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:27:22.270903  505287 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-491604" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:27:22.271558  505287 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-491604" cluster setting kubeconfig missing "no-preload-491604" context setting]
	I1026 09:27:22.272655  505287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:22.274687  505287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:27:22.287302  505287 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 09:27:22.287334  505287 kubeadm.go:601] duration metric: took 32.502923ms to restartPrimaryControlPlane
	I1026 09:27:22.287342  505287 kubeadm.go:402] duration metric: took 190.560082ms to StartCluster
	I1026 09:27:22.287356  505287 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:22.287424  505287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:27:22.292542  505287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:27:22.292933  505287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:27:22.293373  505287 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:27:22.293470  505287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:27:22.293657  505287 addons.go:69] Setting storage-provisioner=true in profile "no-preload-491604"
	I1026 09:27:22.293689  505287 addons.go:238] Setting addon storage-provisioner=true in "no-preload-491604"
	W1026 09:27:22.293723  505287 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:27:22.293780  505287 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:27:22.293847  505287 addons.go:69] Setting dashboard=true in profile "no-preload-491604"
	I1026 09:27:22.299790  505287 addons.go:238] Setting addon dashboard=true in "no-preload-491604"
	W1026 09:27:22.299820  505287 addons.go:247] addon dashboard should already be in state true
	I1026 09:27:22.299882  505287 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:27:22.300250  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.300423  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.293967  505287 addons.go:69] Setting default-storageclass=true in profile "no-preload-491604"
	I1026 09:27:22.301681  505287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-491604"
	I1026 09:27:22.297195  505287 out.go:179] * Verifying Kubernetes components...
	W1026 09:27:18.959244  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:20.963467  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:22.302180  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.307579  505287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:27:22.364151  505287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:27:22.364277  505287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:27:22.367944  505287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:27:22.367968  505287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:27:22.368035  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:22.373837  505287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:27:22.375582  505287 addons.go:238] Setting addon default-storageclass=true in "no-preload-491604"
	W1026 09:27:22.375600  505287 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:27:22.375674  505287 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:27:22.376222  505287 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:27:22.378986  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:27:22.379011  505287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:27:22.379077  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:22.423991  505287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:27:22.424012  505287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:27:22.424076  505287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:27:22.427608  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:22.446051  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:22.467180  505287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:27:22.702306  505287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:27:22.703147  505287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:27:22.725152  505287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:27:22.743811  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:27:22.743882  505287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:27:22.774138  505287 node_ready.go:35] waiting up to 6m0s for node "no-preload-491604" to be "Ready" ...
	I1026 09:27:22.814654  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:27:22.814680  505287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:27:22.930573  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:27:22.930599  505287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:27:23.003333  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:27:23.003363  505287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:27:23.022660  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:27:23.022686  505287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:27:23.050537  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:27:23.050608  505287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:27:23.072596  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:27:23.072623  505287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:27:23.096583  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:27:23.096608  505287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:27:23.110764  505287 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:27:23.110796  505287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:27:23.127841  505287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 09:27:23.455947  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:25.456482  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:27.300383  505287 node_ready.go:49] node "no-preload-491604" is "Ready"
	I1026 09:27:27.300414  505287 node_ready.go:38] duration metric: took 4.526183715s for node "no-preload-491604" to be "Ready" ...
	I1026 09:27:27.300430  505287 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:27:27.300496  505287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:27:27.692909  505287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.989513004s)
	I1026 09:27:28.748603  505287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.023367802s)
	I1026 09:27:29.098069  505287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.970182183s)
	I1026 09:27:29.098284  505287 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.797769532s)
	I1026 09:27:29.098308  505287 api_server.go:72] duration metric: took 6.805309707s to wait for apiserver process to appear ...
	I1026 09:27:29.098315  505287 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:27:29.098333  505287 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:27:29.101313  505287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-491604 addons enable metrics-server
	
	I1026 09:27:29.104251  505287 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1026 09:27:29.107178  505287 addons.go:514] duration metric: took 6.813676535s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1026 09:27:29.115850  505287 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 09:27:29.115885  505287 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 09:27:29.598461  505287 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 09:27:29.609011  505287 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 09:27:29.611081  505287 api_server.go:141] control plane version: v1.34.1
	I1026 09:27:29.611147  505287 api_server.go:131] duration metric: took 512.825828ms to wait for apiserver health ...
	I1026 09:27:29.611183  505287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:27:29.614867  505287 system_pods.go:59] 8 kube-system pods found
	I1026 09:27:29.614946  505287 system_pods.go:61] "coredns-66bc5c9577-2rq75" [b400112c-40a5-4ef6-82d5-b4533cb6e4ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:29.614971  505287 system_pods.go:61] "etcd-no-preload-491604" [dfad6de6-c15a-4fc5-b549-b2fee23d4c8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:29.615006  505287 system_pods.go:61] "kindnet-4g8pl" [c83a24cf-3ae8-42a5-9f26-13ff5989e6ee] Running
	I1026 09:27:29.615032  505287 system_pods.go:61] "kube-apiserver-no-preload-491604" [78d09308-6568-4c2b-8264-06e86e844c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:29.615054  505287 system_pods.go:61] "kube-controller-manager-no-preload-491604" [1d362bb3-9059-461c-a691-5a5c8404168b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:29.615075  505287 system_pods.go:61] "kube-proxy-tpv97" [669ea85d-25d5-4e3e-b4b6-1c86141967f3] Running
	I1026 09:27:29.615109  505287 system_pods.go:61] "kube-scheduler-no-preload-491604" [44ecdc58-d295-4e3d-a881-4933cc93233f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:29.615133  505287 system_pods.go:61] "storage-provisioner" [a5a9e6e7-af2c-4731-bedc-f98677818988] Running
	I1026 09:27:29.615155  505287 system_pods.go:74] duration metric: took 3.952544ms to wait for pod list to return data ...
	I1026 09:27:29.615174  505287 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:27:29.658804  505287 default_sa.go:45] found service account: "default"
	I1026 09:27:29.658875  505287 default_sa.go:55] duration metric: took 43.679659ms for default service account to be created ...
	I1026 09:27:29.658899  505287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 09:27:29.662849  505287 system_pods.go:86] 8 kube-system pods found
	I1026 09:27:29.662936  505287 system_pods.go:89] "coredns-66bc5c9577-2rq75" [b400112c-40a5-4ef6-82d5-b4533cb6e4ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 09:27:29.662967  505287 system_pods.go:89] "etcd-no-preload-491604" [dfad6de6-c15a-4fc5-b549-b2fee23d4c8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:27:29.663003  505287 system_pods.go:89] "kindnet-4g8pl" [c83a24cf-3ae8-42a5-9f26-13ff5989e6ee] Running
	I1026 09:27:29.663035  505287 system_pods.go:89] "kube-apiserver-no-preload-491604" [78d09308-6568-4c2b-8264-06e86e844c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:27:29.663058  505287 system_pods.go:89] "kube-controller-manager-no-preload-491604" [1d362bb3-9059-461c-a691-5a5c8404168b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:27:29.663079  505287 system_pods.go:89] "kube-proxy-tpv97" [669ea85d-25d5-4e3e-b4b6-1c86141967f3] Running
	I1026 09:27:29.663115  505287 system_pods.go:89] "kube-scheduler-no-preload-491604" [44ecdc58-d295-4e3d-a881-4933cc93233f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:27:29.663140  505287 system_pods.go:89] "storage-provisioner" [a5a9e6e7-af2c-4731-bedc-f98677818988] Running
	I1026 09:27:29.663166  505287 system_pods.go:126] duration metric: took 4.246045ms to wait for k8s-apps to be running ...
	I1026 09:27:29.663186  505287 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 09:27:29.663277  505287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:27:29.678620  505287 system_svc.go:56] duration metric: took 15.42457ms WaitForService to wait for kubelet
	I1026 09:27:29.678693  505287 kubeadm.go:586] duration metric: took 7.38569256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:27:29.678805  505287 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:27:29.681544  505287 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:27:29.681578  505287 node_conditions.go:123] node cpu capacity is 2
	I1026 09:27:29.681591  505287 node_conditions.go:105] duration metric: took 2.767421ms to run NodePressure ...
	I1026 09:27:29.681622  505287 start.go:241] waiting for startup goroutines ...
	I1026 09:27:29.681636  505287 start.go:246] waiting for cluster config update ...
	I1026 09:27:29.681648  505287 start.go:255] writing updated cluster config ...
	I1026 09:27:29.681956  505287 ssh_runner.go:195] Run: rm -f paused
	I1026 09:27:29.686663  505287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:27:29.690099  505287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rq75" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 09:27:31.695875  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:27.956352  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:30.454839  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:33.696999  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:36.195381  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:32.455221  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:34.956700  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	W1026 09:27:37.470285  502650 pod_ready.go:104] pod "coredns-66bc5c9577-r7mm4" is not "Ready", error: <nil>
	I1026 09:27:38.957854  502650 pod_ready.go:94] pod "coredns-66bc5c9577-r7mm4" is "Ready"
	I1026 09:27:38.957889  502650 pod_ready.go:86] duration metric: took 31.508199172s for pod "coredns-66bc5c9577-r7mm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.964485  502650 pod_ready.go:83] waiting for pod "etcd-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.971030  502650 pod_ready.go:94] pod "etcd-embed-certs-204381" is "Ready"
	I1026 09:27:38.971081  502650 pod_ready.go:86] duration metric: took 6.55851ms for pod "etcd-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.976031  502650 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.985786  502650 pod_ready.go:94] pod "kube-apiserver-embed-certs-204381" is "Ready"
	I1026 09:27:38.985862  502650 pod_ready.go:86] duration metric: took 9.745432ms for pod "kube-apiserver-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:38.990047  502650 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.154375  502650 pod_ready.go:94] pod "kube-controller-manager-embed-certs-204381" is "Ready"
	I1026 09:27:39.154453  502650 pod_ready.go:86] duration metric: took 164.338426ms for pod "kube-controller-manager-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.355079  502650 pod_ready.go:83] waiting for pod "kube-proxy-75p8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.754463  502650 pod_ready.go:94] pod "kube-proxy-75p8k" is "Ready"
	I1026 09:27:39.754539  502650 pod_ready.go:86] duration metric: took 399.389586ms for pod "kube-proxy-75p8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:39.954894  502650 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:40.354304  502650 pod_ready.go:94] pod "kube-scheduler-embed-certs-204381" is "Ready"
	I1026 09:27:40.354332  502650 pod_ready.go:86] duration metric: took 399.369549ms for pod "kube-scheduler-embed-certs-204381" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:27:40.354350  502650 pod_ready.go:40] duration metric: took 32.908951965s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:27:40.442348  502650 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:27:40.446614  502650 out.go:179] * Done! kubectl is now configured to use "embed-certs-204381" cluster and "default" namespace by default
	W1026 09:27:38.197452  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:40.696363  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:42.696615  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:45.197328  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:47.695552  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:49.696166  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:27:51.696324  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
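	
	The pod_ready.go polling above checks each pod's Ready condition in a loop until it flips or the 4m0s budget expires. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default location; the pod name, namespace, interval, and timeout are illustrative, not minikube's actual implementation:
	
		package main
		
		import (
			"context"
			"fmt"
			"time"
		
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			client := kubernetes.NewForConfigOrDie(cfg)
		
			// Poll every 2s, up to 4m, until the pod reports Ready.
			err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
				func(ctx context.Context) (bool, error) {
					pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-2rq75", metav1.GetOptions{})
					if err != nil {
						return false, nil // transient API errors count as "not ready yet"
					}
					for _, c := range pod.Status.Conditions {
						if c.Type == corev1.PodReady {
							return c.Status == corev1.ConditionTrue, nil
						}
					}
					return false, nil
				})
			fmt.Println("ready:", err == nil)
		}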
	
	
	==> CRI-O <==
	Oct 26 09:27:32 embed-certs-204381 crio[650]: time="2025-10-26T09:27:32.312700533Z" level=info msg="Removed container 2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn/dashboard-metrics-scraper" id=d7640327-853b-440b-83cb-640a0e8f274a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:27:36 embed-certs-204381 conmon[1136]: conmon 1f45d5982e892d775a90 <ninfo>: container 1138 exited with status 1
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.304896795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d74d5f8f-446d-43e9-89ec-f11248e6bf5a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.30587892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a25d8c95-2854-4e5c-9bab-9514fdb486c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.308085776Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6311c385-75b9-4a2d-9b90-e52505a5ceec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.30822117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.320060505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.320280594Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/377504a3a69f3d6e79fcc55c327a69632e23e6c00aa5a89264aa6859b634869a/merged/etc/passwd: no such file or directory"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.32030484Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/377504a3a69f3d6e79fcc55c327a69632e23e6c00aa5a89264aa6859b634869a/merged/etc/group: no such file or directory"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.320631523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.37630809Z" level=info msg="Created container 9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4: kube-system/storage-provisioner/storage-provisioner" id=6311c385-75b9-4a2d-9b90-e52505a5ceec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.377391731Z" level=info msg="Starting container: 9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4" id=658be225-80a5-4c40-9890-d8d233446c67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:27:37 embed-certs-204381 crio[650]: time="2025-10-26T09:27:37.381398553Z" level=info msg="Started container" PID=1643 containerID=9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4 description=kube-system/storage-provisioner/storage-provisioner id=658be225-80a5-4c40-9890-d8d233446c67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f0d33a66a15f3820ea454918a616e812c166caba4333ed6c7ae5ea1184b16ec
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.000243478Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.007755428Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.007791826Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.007816311Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.012746607Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.012784441Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.01280772Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.01650239Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.016534644Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.016561681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.019705878Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:27:47 embed-certs-204381 crio[650]: time="2025-10-26T09:27:47.01974747Z" level=info msg="Updated default CNI network name to kindnet"
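	
	The CREATE/WRITE/RENAME sequence above is kindnet refreshing its CNI config atomically: the new config is written to 10-kindnet.conflist.temp and then renamed over the final name, so CRI-O's CNI monitor never observes a half-written file. A minimal sketch of that write-then-rename pattern, with illustrative paths and payload rather than kindnet's actual code:
	
		package main
		
		import (
			"os"
			"path/filepath"
		)
		
		func writeCNIConfig(dir, name string, data []byte) error {
			// Write to a temp file first so watchers never see partial content.
			tmp := filepath.Join(dir, name+".temp")
			if err := os.WriteFile(tmp, data, 0o644); err != nil {
				return err
			}
			// Rename is atomic on POSIX filesystems; the watcher sees the
			// RENAME/CREATE pair logged above rather than a truncated file.
			return os.Rename(tmp, filepath.Join(dir, name))
		}
		
		func main() {
			conf := []byte(`{"cniVersion":"0.4.0","name":"kindnet","plugins":[{"type":"ptp"}]}`)
			if err := writeCNIConfig("/etc/cni/net.d", "10-kindnet.conflist", conf); err != nil {
				panic(err)
			}
		}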
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9c9ac3d30f136       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   2f0d33a66a15f       storage-provisioner                          kube-system
	0f031cb298d46       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   36aa05901fbbf       dashboard-metrics-scraper-6ffb444bf9-9krqn   kubernetes-dashboard
	ac441bded3b54       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   1e8b33e108fcd       kubernetes-dashboard-855c9754f9-5ff88        kubernetes-dashboard
	e7f9e9902aa92       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   d8e1129e7c41f       coredns-66bc5c9577-r7mm4                     kube-system
	65f3891acebf1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   fd73e3b158d38       busybox                                      default
	1f45d5982e892       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   2f0d33a66a15f       storage-provisioner                          kube-system
	c589218ba65cf       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   44959a7a119b2       kube-proxy-75p8k                             kube-system
	7ab5b4e25d7a5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   8ba3ada78c598       kindnet-dcxxb                                kube-system
	fc645c6e07eb5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           55 seconds ago      Running             etcd                        1                   9143ed426f484       etcd-embed-certs-204381                      kube-system
	d4d7f74617d8d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           55 seconds ago      Running             kube-scheduler              1                   c5f999d51131b       kube-scheduler-embed-certs-204381            kube-system
	c4cef10c093e0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           55 seconds ago      Running             kube-apiserver              1                   1d16872b9d991       kube-apiserver-embed-certs-204381            kube-system
	c9cbd9d3e4cfa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           55 seconds ago      Running             kube-controller-manager     1                   408ea9e6fa40d       kube-controller-manager-embed-certs-204381   kube-system
	
	
	==> coredns [e7f9e9902aa925c7efff9d04c9c478d1ff7cb3c07814d288d0998109c3d5d770] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59417 - 27313 "HINFO IN 7150248882293645846.3583209349812123199. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048431877s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
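	
	The dial timeouts to 10.96.0.1:443 mean CoreDNS could not reach the kubernetes Service VIP while kube-proxy and the CNI were still coming back after the restart; the plugin keeps retrying, and the kindnet log below shows the same timeouts clearing at 09:27:38. A quick connectivity probe for this failure mode might look like the following sketch (VIP and timeout are illustrative):
	
		package main
		
		import (
			"fmt"
			"net"
			"time"
		)
		
		func main() {
			// Dial the in-cluster API VIP the way CoreDNS's client does.
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
			if err != nil {
				fmt.Println("service VIP unreachable:", err)
				return
			}
			conn.Close()
			fmt.Println("service VIP reachable")
		}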
	
	
	==> describe nodes <==
	Name:               embed-certs-204381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-204381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=embed-certs-204381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-204381
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:27:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:25:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:27:36 +0000   Sun, 26 Oct 2025 09:26:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-204381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4c29094e-f18a-4ac6-86a6-71f16f27aacd
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-r7mm4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-embed-certs-204381                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-dcxxb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-embed-certs-204381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-embed-certs-204381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-75p8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-embed-certs-204381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9krqn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ff88         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m19s                  kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s                  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m19s                  kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m15s                  node-controller  Node embed-certs-204381 event: Registered Node embed-certs-204381 in Controller
	  Normal   NodeReady                93s                    kubelet          Node embed-certs-204381 status is now: NodeReady
	  Normal   Starting                 57s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s (x8 over 56s)      kubelet          Node embed-certs-204381 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 56s)      kubelet          Node embed-certs-204381 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 56s)      kubelet          Node embed-certs-204381 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node embed-certs-204381 event: Registered Node embed-certs-204381 in Controller
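	
	Note on the Allocated resources table: the 850m CPU request total is just the sum of the per-pod requests above (250m apiserver + 200m controller-manager + 100m each for coredns, etcd, kindnet, and the scheduler), i.e. 42% of the node's 2-CPU capacity. The 100m CPU limit comes from kindnet alone, and the 220Mi memory limit is coredns's 170Mi plus kindnet's 50Mi.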
	
	
	==> dmesg <==
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fc645c6e07eb52dcf2d2a8c865d46ef41d8fb8a4a5bf76c369270785a3bb0d6e] <==
	{"level":"warn","ts":"2025-10-26T09:27:04.354081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.389118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.436821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.441986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.465136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.477621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.495785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.519743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.531282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.548600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.571366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.590939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.625329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.651554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.672000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.683680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.705667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.718986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.735021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.750325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.767197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.806984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.825049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.843269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:04.926796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46972","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:27:57 up  3:10,  0 user,  load average: 2.74, 3.48, 3.00
	Linux embed-certs-204381 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ab5b4e25d7a54fac31b2dca5a6e398e10f0bbd81c9e4e4407ddd084251219b7] <==
	I1026 09:27:06.716904       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:27:06.735338       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:27:06.735487       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:27:06.735501       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:27:06.735530       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:27:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:27:07.000254       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:27:07.004475       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:27:07.004613       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:27:07.005749       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:27:37.000277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 09:27:37.005914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:27:37.006051       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:27:37.006149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 09:27:38.205768       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:27:38.205877       1 metrics.go:72] Registering metrics
	I1026 09:27:38.205973       1 controller.go:711] "Syncing nftables rules"
	I1026 09:27:46.999887       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:27:46.999974       1 main.go:301] handling current node
	I1026 09:27:57.000027       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 09:27:57.000064       1 main.go:301] handling current node
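	
	The reflector "Failed to watch ... i/o timeout" errors at 09:27:37 are followed within a second by "Caches are synced": client-go reflectors retry their List/Watch calls with backoff, so a transient window where the API VIP is unreachable self-heals. A minimal sketch of that retry-with-backoff shape using the same wait package (parameters are illustrative, not client-go's internal values):
	
		package main
		
		import (
			"fmt"
			"time"
		
			"k8s.io/apimachinery/pkg/util/wait"
		)
		
		func main() {
			backoff := wait.Backoff{Duration: time.Second, Factor: 2.0, Steps: 5}
			attempt := 0
			err := wait.ExponentialBackoff(backoff, func() (bool, error) {
				attempt++
				fmt.Println("list attempt", attempt)
				// a real reflector re-issues its List call here and returns
				// true once the apiserver answers
				return attempt >= 3, nil
			})
			fmt.Println("done, err:", err)
		}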
	
	
	==> kube-apiserver [c4cef10c093e047656711f6ddd43f45e451b4234b38559cf8799fd096a53eda3] <==
	I1026 09:27:06.012535       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 09:27:06.026865       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 09:27:06.028329       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:27:06.028778       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:27:06.029008       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:27:06.029058       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:27:06.029368       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:27:06.029467       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 09:27:06.030019       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:27:06.056143       1 aggregator.go:171] initial CRD sync complete...
	I1026 09:27:06.056174       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 09:27:06.056212       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:27:06.056232       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:27:06.064376       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1026 09:27:06.114496       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:27:06.518807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:27:06.939423       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:27:07.052362       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:27:07.154450       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:27:07.194579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:27:07.368859       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.154.122"}
	I1026 09:27:07.389526       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.219.206"}
	I1026 09:27:09.446042       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:27:09.548305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:27:09.596002       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c9cbd9d3e4cfa1cf00ca6b7ab613ad7c0bbc25320fa33f24966b346c5cfee930] <==
	I1026 09:27:09.107820       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:27:09.107952       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:27:09.108042       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-204381"
	I1026 09:27:09.108127       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 09:27:09.109957       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:27:09.112450       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:27:09.114861       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 09:27:09.118278       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:27:09.120542       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:27:09.124969       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:27:09.127225       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:27:09.133414       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:27:09.135786       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 09:27:09.136521       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:09.138820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:27:09.140068       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 09:27:09.140076       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:27:09.140128       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:09.140366       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:27:09.140400       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:27:09.140146       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:27:09.140159       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 09:27:09.147567       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:27:09.160719       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:27:09.163983       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [c589218ba65cfb5f8058e769abaec08033e797aa266d515739ceea95a26adbb3] <==
	I1026 09:27:07.064408       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:27:07.295618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:27:07.408422       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:27:07.408494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 09:27:07.408560       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:27:07.449101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:27:07.449250       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:27:07.463866       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:27:07.464281       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:27:07.464337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:07.465525       1 config.go:200] "Starting service config controller"
	I1026 09:27:07.465614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:27:07.465657       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:27:07.465684       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:27:07.465730       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:27:07.465762       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:27:07.466421       1 config.go:309] "Starting node config controller"
	I1026 09:27:07.466475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:27:07.466504       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:27:07.566944       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:27:07.567010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:27:07.567035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d4d7f74617d8d427b2faab1b3c5e48bbbae37682e6b48f8e1d3141a76e4a4b45] <==
	I1026 09:27:05.852770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:05.866804       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:27:05.867003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:05.867020       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:05.867035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 09:27:05.911269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:27:05.911356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:27:05.911410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:27:05.911465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:27:05.911512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:27:05.911559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 09:27:05.911620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:27:05.911668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:27:05.911722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:27:05.911768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:27:05.911815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:27:05.911900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:27:05.911952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:27:05.911991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:27:05.912027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:27:05.912067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:27:05.912117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:27:05.912162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:27:05.912325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 09:27:07.568019       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:27:06 embed-certs-204381 kubelet[774]: W1026 09:27:06.391113     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-fd73e3b158d380f1afdc489e760e602468ea722ef1f405e4b2159a1e02e33228 WatchSource:0}: Error finding container fd73e3b158d380f1afdc489e760e602468ea722ef1f405e4b2159a1e02e33228: Status 404 returned error can't find the container with id fd73e3b158d380f1afdc489e760e602468ea722ef1f405e4b2159a1e02e33228
	Oct 26 09:27:06 embed-certs-204381 kubelet[774]: W1026 09:27:06.480393     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-d8e1129e7c41f6ad67588f475b0fc578e128da3e31d63dd4527c6ada29cc5463 WatchSource:0}: Error finding container d8e1129e7c41f6ad67588f475b0fc578e128da3e31d63dd4527c6ada29cc5463: Status 404 returned error can't find the container with id d8e1129e7c41f6ad67588f475b0fc578e128da3e31d63dd4527c6ada29cc5463
	Oct 26 09:27:08 embed-certs-204381 kubelet[774]: I1026 09:27:08.569872     774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840615     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4f0befad-4bc4-4a9c-9792-d5cffe2c2666-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-9krqn\" (UID: \"4f0befad-4bc4-4a9c-9792-d5cffe2c2666\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840676     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1d408d28-04be-46eb-9ff5-f6ecf8801b89-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-5ff88\" (UID: \"1d408d28-04be-46eb-9ff5-f6ecf8801b89\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ff88"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840698     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mr52\" (UniqueName: \"kubernetes.io/projected/1d408d28-04be-46eb-9ff5-f6ecf8801b89-kube-api-access-6mr52\") pod \"kubernetes-dashboard-855c9754f9-5ff88\" (UID: \"1d408d28-04be-46eb-9ff5-f6ecf8801b89\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ff88"
	Oct 26 09:27:09 embed-certs-204381 kubelet[774]: I1026 09:27:09.840723     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbx57\" (UniqueName: \"kubernetes.io/projected/4f0befad-4bc4-4a9c-9792-d5cffe2c2666-kube-api-access-hbx57\") pod \"dashboard-metrics-scraper-6ffb444bf9-9krqn\" (UID: \"4f0befad-4bc4-4a9c-9792-d5cffe2c2666\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn"
	Oct 26 09:27:10 embed-certs-204381 kubelet[774]: W1026 09:27:10.094307     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fbf6b6fb12ea5f35a5cc3361862936e680e81968432f6dc88533ed1fa06722ab/crio-36aa05901fbbf05e14e16b7b88daabe5b4c3f27923e0b14147104b21f453ffc2 WatchSource:0}: Error finding container 36aa05901fbbf05e14e16b7b88daabe5b4c3f27923e0b14147104b21f453ffc2: Status 404 returned error can't find the container with id 36aa05901fbbf05e14e16b7b88daabe5b4c3f27923e0b14147104b21f453ffc2
	Oct 26 09:27:16 embed-certs-204381 kubelet[774]: I1026 09:27:16.234405     774 scope.go:117] "RemoveContainer" containerID="55c56d5884832176441c470ef85ae10a68cb4b165fa98600963aa42733a787f7"
	Oct 26 09:27:17 embed-certs-204381 kubelet[774]: I1026 09:27:17.247773     774 scope.go:117] "RemoveContainer" containerID="55c56d5884832176441c470ef85ae10a68cb4b165fa98600963aa42733a787f7"
	Oct 26 09:27:17 embed-certs-204381 kubelet[774]: I1026 09:27:17.248857     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:17 embed-certs-204381 kubelet[774]: E1026 09:27:17.249117     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:20 embed-certs-204381 kubelet[774]: I1026 09:27:20.214545     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:20 embed-certs-204381 kubelet[774]: E1026 09:27:20.214771     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:22 embed-certs-204381 kubelet[774]: I1026 09:27:22.318244     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ff88" podStartSLOduration=1.884338778 podStartE2EDuration="13.318227455s" podCreationTimestamp="2025-10-26 09:27:09 +0000 UTC" firstStartedPulling="2025-10-26 09:27:10.136233756 +0000 UTC m=+9.385309098" lastFinishedPulling="2025-10-26 09:27:21.570122433 +0000 UTC m=+20.819197775" observedRunningTime="2025-10-26 09:27:22.317812919 +0000 UTC m=+21.566888269" watchObservedRunningTime="2025-10-26 09:27:22.318227455 +0000 UTC m=+21.567302805"
	Oct 26 09:27:32 embed-certs-204381 kubelet[774]: I1026 09:27:32.114033     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:32 embed-certs-204381 kubelet[774]: I1026 09:27:32.289510     774 scope.go:117] "RemoveContainer" containerID="2bdb949ce741b225a6667091eb7b4d7014d719d1e5eb95c9f0ae51aade65be49"
	Oct 26 09:27:33 embed-certs-204381 kubelet[774]: I1026 09:27:33.293070     774 scope.go:117] "RemoveContainer" containerID="0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	Oct 26 09:27:33 embed-certs-204381 kubelet[774]: E1026 09:27:33.293229     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:37 embed-certs-204381 kubelet[774]: I1026 09:27:37.304491     774 scope.go:117] "RemoveContainer" containerID="1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247"
	Oct 26 09:27:40 embed-certs-204381 kubelet[774]: I1026 09:27:40.214135     774 scope.go:117] "RemoveContainer" containerID="0f031cb298d46e613f0b6222282cc4ed0e2bdf7189a55cfff47cfc47490ccb82"
	Oct 26 09:27:40 embed-certs-204381 kubelet[774]: E1026 09:27:40.214891     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9krqn_kubernetes-dashboard(4f0befad-4bc4-4a9c-9792-d5cffe2c2666)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9krqn" podUID="4f0befad-4bc4-4a9c-9792-d5cffe2c2666"
	Oct 26 09:27:52 embed-certs-204381 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:27:52 embed-certs-204381 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:27:52 embed-certs-204381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ac441bded3b54ada4b84416b407f28fd84714df732c42deea0ac4709a5553635] <==
	2025/10/26 09:27:21 Using namespace: kubernetes-dashboard
	2025/10/26 09:27:21 Using in-cluster config to connect to apiserver
	2025/10/26 09:27:21 Using secret token for csrf signing
	2025/10/26 09:27:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:27:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:27:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 09:27:21 Generating JWE encryption key
	2025/10/26 09:27:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:27:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:27:22 Initializing JWE encryption key from synchronized object
	2025/10/26 09:27:22 Creating in-cluster Sidecar client
	2025/10/26 09:27:22 Serving insecurely on HTTP port: 9090
	2025/10/26 09:27:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:21 Starting overwatch
	
	
	==> storage-provisioner [1f45d5982e892d775a901de972579c275d0fe4b083c8cc3e537ec1135d56f247] <==
	I1026 09:27:06.712377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:27:36.717097       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9c9ac3d30f1362c6834537d80c353977d95805f5f97277a733cca99a4899e5b4] <==
	I1026 09:27:37.398343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:27:37.423887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:27:37.426286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:27:37.432447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:40.887162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:45.148970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:48.747049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:51.800192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:54.822892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:54.828524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:27:54.828687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:27:54.831097       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-204381_4d52abfd-87fd-4ba1-9871-21a43032fa08!
	I1026 09:27:54.840164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff3b783c-a30e-49f8-b18c-92455e17892c", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-204381_4d52abfd-87fd-4ba1-9871-21a43032fa08 became leader
	W1026 09:27:54.841786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:54.851672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:27:54.932582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-204381_4d52abfd-87fd-4ba1-9871-21a43032fa08!
	W1026 09:27:56.858450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:27:56.865729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204381 -n embed-certs-204381
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204381 -n embed-certs-204381: exit status 2 (391.348009ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-204381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.23s)
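The post-mortem logs above are consistent with the pause flow itself stopping the node agent: the kubelet.service entries at 09:27:52 show systemd deactivating the kubelet, and the first storage-provisioner instance exited on an i/o timeout to the service VIP 10.96.0.1:443 before its replacement acquired the lease at 09:27:54. A minimal sketch for re-checking the node state by hand, assuming the embed-certs-204381 profile is still up (these commands are illustrative, not part of the test):

	out/minikube-linux-arm64 -p embed-certs-204381 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 -p embed-certs-204381 ssh -- sudo crictl ps --state running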

TestStartStop/group/no-preload/serial/Pause (7.68s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-491604 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-491604 --alsologtostderr -v=1: exit status 80 (2.346218515s)

-- stdout --
	* Pausing node no-preload-491604 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 09:28:17.697595  510904 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:28:17.698030  510904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:17.698092  510904 out.go:374] Setting ErrFile to fd 2...
	I1026 09:28:17.698111  510904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:17.698422  510904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:28:17.698929  510904 out.go:368] Setting JSON to false
	I1026 09:28:17.698984  510904 mustload.go:65] Loading cluster: no-preload-491604
	I1026 09:28:17.699402  510904 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:17.700019  510904 cli_runner.go:164] Run: docker container inspect no-preload-491604 --format={{.State.Status}}
	I1026 09:28:17.720330  510904 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:28:17.720750  510904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:17.810696  510904 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-26 09:28:17.800994588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:17.811432  510904 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-491604 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 09:28:17.815318  510904 out.go:179] * Pausing node no-preload-491604 ... 
	I1026 09:28:17.818349  510904 host.go:66] Checking if "no-preload-491604" exists ...
	I1026 09:28:17.818683  510904 ssh_runner.go:195] Run: systemctl --version
	I1026 09:28:17.818793  510904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-491604
	I1026 09:28:17.844542  510904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/no-preload-491604/id_rsa Username:docker}
	I1026 09:28:17.958978  510904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:28:17.981905  510904 pause.go:52] kubelet running: true
	I1026 09:28:17.981975  510904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:28:18.294260  510904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:28:18.294353  510904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:28:18.393202  510904 cri.go:89] found id: "8da19ca23d0c5adbccbdd3a7e174a027217fe43e5513e87036c1f2214619818a"
	I1026 09:28:18.393228  510904 cri.go:89] found id: "f748b54f0c957727dc6734b5f001264bcaaa3216f63bcf86d3463da0c8757dd9"
	I1026 09:28:18.393234  510904 cri.go:89] found id: "c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f"
	I1026 09:28:18.393238  510904 cri.go:89] found id: "1b92235864cad4c9d08a369f04045fb50159db67c04f870fa045d26a1a364397"
	I1026 09:28:18.393241  510904 cri.go:89] found id: "0e1f3ecf7a18ed6b903c497328088b59492d3347ea87dbcf4e7ac422e8ec654b"
	I1026 09:28:18.393249  510904 cri.go:89] found id: "4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e"
	I1026 09:28:18.393253  510904 cri.go:89] found id: "23d945999b91d55ecc1428312d8093f362e2eec0dc5f7df30a9d6f75b0350ff5"
	I1026 09:28:18.393257  510904 cri.go:89] found id: "69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b"
	I1026 09:28:18.393260  510904 cri.go:89] found id: "4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81"
	I1026 09:28:18.393266  510904 cri.go:89] found id: "2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618"
	I1026 09:28:18.393270  510904 cri.go:89] found id: "edf0a0964a7138fd0ed9bdee8fc158b483d259210ac09da89a80268c5f916cb1"
	I1026 09:28:18.393274  510904 cri.go:89] found id: ""
	I1026 09:28:18.393320  510904 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:28:18.404727  510904 retry.go:31] will retry after 367.084884ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:28:18Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:28:18.772142  510904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:28:18.789330  510904 pause.go:52] kubelet running: false
	I1026 09:28:18.789454  510904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:28:19.009743  510904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:28:19.009874  510904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:28:19.169076  510904 cri.go:89] found id: "8da19ca23d0c5adbccbdd3a7e174a027217fe43e5513e87036c1f2214619818a"
	I1026 09:28:19.169147  510904 cri.go:89] found id: "f748b54f0c957727dc6734b5f001264bcaaa3216f63bcf86d3463da0c8757dd9"
	I1026 09:28:19.169167  510904 cri.go:89] found id: "c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f"
	I1026 09:28:19.169187  510904 cri.go:89] found id: "1b92235864cad4c9d08a369f04045fb50159db67c04f870fa045d26a1a364397"
	I1026 09:28:19.169223  510904 cri.go:89] found id: "0e1f3ecf7a18ed6b903c497328088b59492d3347ea87dbcf4e7ac422e8ec654b"
	I1026 09:28:19.169244  510904 cri.go:89] found id: "4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e"
	I1026 09:28:19.169268  510904 cri.go:89] found id: "23d945999b91d55ecc1428312d8093f362e2eec0dc5f7df30a9d6f75b0350ff5"
	I1026 09:28:19.169286  510904 cri.go:89] found id: "69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b"
	I1026 09:28:19.169315  510904 cri.go:89] found id: "4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81"
	I1026 09:28:19.169346  510904 cri.go:89] found id: "2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618"
	I1026 09:28:19.169364  510904 cri.go:89] found id: "edf0a0964a7138fd0ed9bdee8fc158b483d259210ac09da89a80268c5f916cb1"
	I1026 09:28:19.169397  510904 cri.go:89] found id: ""
	I1026 09:28:19.169485  510904 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:28:19.180552  510904 retry.go:31] will retry after 361.86551ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:28:19Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:28:19.543181  510904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:28:19.557391  510904 pause.go:52] kubelet running: false
	I1026 09:28:19.557457  510904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:28:19.787552  510904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:28:19.787640  510904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:28:19.921755  510904 cri.go:89] found id: "8da19ca23d0c5adbccbdd3a7e174a027217fe43e5513e87036c1f2214619818a"
	I1026 09:28:19.921779  510904 cri.go:89] found id: "f748b54f0c957727dc6734b5f001264bcaaa3216f63bcf86d3463da0c8757dd9"
	I1026 09:28:19.921784  510904 cri.go:89] found id: "c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f"
	I1026 09:28:19.921788  510904 cri.go:89] found id: "1b92235864cad4c9d08a369f04045fb50159db67c04f870fa045d26a1a364397"
	I1026 09:28:19.921791  510904 cri.go:89] found id: "0e1f3ecf7a18ed6b903c497328088b59492d3347ea87dbcf4e7ac422e8ec654b"
	I1026 09:28:19.921795  510904 cri.go:89] found id: "4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e"
	I1026 09:28:19.921798  510904 cri.go:89] found id: "23d945999b91d55ecc1428312d8093f362e2eec0dc5f7df30a9d6f75b0350ff5"
	I1026 09:28:19.921801  510904 cri.go:89] found id: "69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b"
	I1026 09:28:19.921805  510904 cri.go:89] found id: "4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81"
	I1026 09:28:19.921810  510904 cri.go:89] found id: "2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618"
	I1026 09:28:19.921814  510904 cri.go:89] found id: "edf0a0964a7138fd0ed9bdee8fc158b483d259210ac09da89a80268c5f916cb1"
	I1026 09:28:19.921817  510904 cri.go:89] found id: ""
	I1026 09:28:19.921864  510904 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:28:19.945778  510904 out.go:203] 
	W1026 09:28:19.948723  510904 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:28:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:28:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 09:28:19.948747  510904 out.go:285] * 
	* 
	W1026 09:28:19.956217  510904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 09:28:19.959101  510904 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-491604 --alsologtostderr -v=1 failed: exit status 80
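The stderr above pins down where pause fails: the kubelet is disabled on the first attempt (kubelet running: true, then false on the retries), but each pass then tries to enumerate running containers with sudo runc list -f json, which exits 1 because /run/runc does not exist on the node, and after three attempts the command gives up with GUEST_PAUSE. A minimal sketch for reproducing the failing step by hand, assuming the no-preload-491604 profile is still running (paths and flags are taken straight from the log; nothing beyond it is verified here):

	out/minikube-linux-arm64 -p no-preload-491604 ssh -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 -p no-preload-491604 ssh -- sudo runc list -f json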
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-491604
helpers_test.go:243: (dbg) docker inspect no-preload-491604:

-- stdout --
	[
	    {
	        "Id": "0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db",
	        "Created": "2025-10-26T09:25:37.402820807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:27:12.361656814Z",
	            "FinishedAt": "2025-10-26T09:27:11.212699105Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/hosts",
	        "LogPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db-json.log",
	        "Name": "/no-preload-491604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db",
	                "LowerDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-491604",
	                "Source": "/var/lib/docker/volumes/no-preload-491604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491604",
	                "name.minikube.sigs.k8s.io": "no-preload-491604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "268f3da30a5a0b487acb5ae9c2c986c385f7fe32a4df8de085eac12a23adc50e",
	            "SandboxKey": "/var/run/docker/netns/268f3da30a5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:ff:29:32:91:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b3fac8619483e027c0a41271c69d710c2df0c76a965d01b990e19e9b1b9a2bd",
	                    "EndpointID": "8a6f62027060f7a8b09eddc35ee54bc0540301dd43eff204555f4c32ad90f3a2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491604",
	                        "0b11d1185923"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604: exit status 2 (420.592338ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
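Note that the host field prints Running while the status command still exits 2; after a partially applied pause the components can be in mixed states, so the single-field format hides the detail. A sketch for dumping the full component breakdown instead, assuming the profile still exists:

	out/minikube-linux-arm64 status -p no-preload-491604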
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-491604 logs -n 25: (1.520995661s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                                                                                               │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                                                                                                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ image   │ no-preload-491604 image list --format=json                                                                                                                                                                                                    │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ pause   │ -p no-preload-491604 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:28:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:28:01.830762  509018 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:28:01.830899  509018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:01.830912  509018 out.go:374] Setting ErrFile to fd 2...
	I1026 09:28:01.830942  509018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:01.831231  509018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:28:01.831708  509018 out.go:368] Setting JSON to false
	I1026 09:28:01.832749  509018 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11432,"bootTime":1761459450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:28:01.832825  509018 start.go:141] virtualization:  
	I1026 09:28:01.837284  509018 out.go:179] * [newest-cni-596581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:28:01.840830  509018 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:28:01.840894  509018 notify.go:220] Checking for updates...
	I1026 09:28:01.844375  509018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:28:01.847664  509018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:28:01.850980  509018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:28:01.854166  509018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:28:01.857523  509018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:28:01.861253  509018 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:01.861361  509018 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:28:01.883976  509018 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:28:01.884107  509018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:01.960909  509018 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:28:01.950945564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:01.961020  509018 docker.go:318] overlay module found
	I1026 09:28:01.964449  509018 out.go:179] * Using the docker driver based on user configuration
	W1026 09:27:59.197694  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:28:01.203599  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	I1026 09:28:01.967512  509018 start.go:305] selected driver: docker
	I1026 09:28:01.967534  509018 start.go:925] validating driver "docker" against <nil>
	I1026 09:28:01.967550  509018 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:28:01.968572  509018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:02.029005  509018 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:28:02.019386779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:02.029166  509018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1026 09:28:02.029200  509018 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1026 09:28:02.029474  509018 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:28:02.034679  509018 out.go:179] * Using Docker driver with root privileges
	I1026 09:28:02.037583  509018 cni.go:84] Creating CNI manager for ""
	I1026 09:28:02.037664  509018 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:02.037679  509018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:28:02.037766  509018 start.go:349] cluster config:
	{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:02.040684  509018 out.go:179] * Starting "newest-cni-596581" primary control-plane node in "newest-cni-596581" cluster
	I1026 09:28:02.043555  509018 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:28:02.046605  509018 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:28:02.049534  509018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:28:02.049777  509018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:02.049822  509018 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:28:02.049835  509018 cache.go:58] Caching tarball of preloaded images
	I1026 09:28:02.049915  509018 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:28:02.049931  509018 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:28:02.050051  509018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:02.050075  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json: {Name:mk2b831f8010d61bca881e6ec71ff69080e491b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:02.070439  509018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:28:02.070463  509018 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:28:02.070482  509018 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:28:02.070506  509018 start.go:360] acquireMachinesLock for newest-cni-596581: {Name:mk457b41350c6ab0aead81b63943ef6522def4bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:28:02.070616  509018 start.go:364] duration metric: took 90.693µs to acquireMachinesLock for "newest-cni-596581"
	I1026 09:28:02.070650  509018 start.go:93] Provisioning new machine with config: &{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:28:02.070760  509018 start.go:125] createHost starting for "" (driver="docker")
	W1026 09:28:03.695922  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	I1026 09:28:04.196752  505287 pod_ready.go:94] pod "coredns-66bc5c9577-2rq75" is "Ready"
	I1026 09:28:04.196834  505287 pod_ready.go:86] duration metric: took 34.506703893s for pod "coredns-66bc5c9577-2rq75" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.200010  505287 pod_ready.go:83] waiting for pod "etcd-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.205630  505287 pod_ready.go:94] pod "etcd-no-preload-491604" is "Ready"
	I1026 09:28:04.205667  505287 pod_ready.go:86] duration metric: took 5.633297ms for pod "etcd-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.208346  505287 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.213769  505287 pod_ready.go:94] pod "kube-apiserver-no-preload-491604" is "Ready"
	I1026 09:28:04.213851  505287 pod_ready.go:86] duration metric: took 5.475748ms for pod "kube-apiserver-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.216793  505287 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.394195  505287 pod_ready.go:94] pod "kube-controller-manager-no-preload-491604" is "Ready"
	I1026 09:28:04.394272  505287 pod_ready.go:86] duration metric: took 177.408835ms for pod "kube-controller-manager-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.593889  505287 pod_ready.go:83] waiting for pod "kube-proxy-tpv97" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.993545  505287 pod_ready.go:94] pod "kube-proxy-tpv97" is "Ready"
	I1026 09:28:04.993578  505287 pod_ready.go:86] duration metric: took 399.611168ms for pod "kube-proxy-tpv97" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:05.194021  505287 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:05.593795  505287 pod_ready.go:94] pod "kube-scheduler-no-preload-491604" is "Ready"
	I1026 09:28:05.593874  505287 pod_ready.go:86] duration metric: took 399.82873ms for pod "kube-scheduler-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:05.593918  505287 pod_ready.go:40] duration metric: took 35.907136054s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:28:05.664159  505287 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:28:05.680375  505287 out.go:179] * Done! kubectl is now configured to use "no-preload-491604" cluster and "default" namespace by default
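	At this point the no-preload start (pid 505287) has finished its extra wait on the labelled kube-system pods. A hedged manual check, assuming kubectl is configured for this cluster as the line above states, polls the same labels the wait loop used:

	# Selectors taken from the pod_ready label list logged above.
	kubectl get pods -n kube-system -l k8s-app=kube-dns
	kubectl get pods -n kube-system -l component=kube-apiserver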
	I1026 09:28:02.074219  509018 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:28:02.074487  509018 start.go:159] libmachine.API.Create for "newest-cni-596581" (driver="docker")
	I1026 09:28:02.074536  509018 client.go:168] LocalClient.Create starting
	I1026 09:28:02.074620  509018 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:28:02.074661  509018 main.go:141] libmachine: Decoding PEM data...
	I1026 09:28:02.074675  509018 main.go:141] libmachine: Parsing certificate...
	I1026 09:28:02.074771  509018 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:28:02.074801  509018 main.go:141] libmachine: Decoding PEM data...
	I1026 09:28:02.074820  509018 main.go:141] libmachine: Parsing certificate...
	I1026 09:28:02.075213  509018 cli_runner.go:164] Run: docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:28:02.091834  509018 cli_runner.go:211] docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:28:02.091937  509018 network_create.go:284] running [docker network inspect newest-cni-596581] to gather additional debugging logs...
	I1026 09:28:02.091961  509018 cli_runner.go:164] Run: docker network inspect newest-cni-596581
	W1026 09:28:02.107915  509018 cli_runner.go:211] docker network inspect newest-cni-596581 returned with exit code 1
	I1026 09:28:02.107947  509018 network_create.go:287] error running [docker network inspect newest-cni-596581]: docker network inspect newest-cni-596581: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-596581 not found
	I1026 09:28:02.107960  509018 network_create.go:289] output of [docker network inspect newest-cni-596581]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-596581 not found
	
	** /stderr **
	I1026 09:28:02.108061  509018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:02.124770  509018 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:28:02.125160  509018 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:28:02.125424  509018 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:28:02.125868  509018 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fb2d0}
	I1026 09:28:02.125894  509018 network_create.go:124] attempt to create docker network newest-cni-596581 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 09:28:02.125951  509018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-596581 newest-cni-596581
	I1026 09:28:02.184527  509018 network_create.go:108] docker network newest-cni-596581 192.168.76.0/24 created
	I1026 09:28:02.184566  509018 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-596581" container
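	The subnet scan above walks private /24 candidates, skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 as taken, picks 192.168.76.0/24, and reserves .2 for the node. A sketch of the equivalent manual steps, reusing the flags from the cli_runner line above:

	# List a bridge network's subnets (the same data minikube inspects per candidate).
	docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# Create the profile network with the flags logged above.
	docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-596581 newest-cni-596581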
	I1026 09:28:02.184657  509018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:28:02.203585  509018 cli_runner.go:164] Run: docker volume create newest-cni-596581 --label name.minikube.sigs.k8s.io=newest-cni-596581 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:28:02.222226  509018 oci.go:103] Successfully created a docker volume newest-cni-596581
	I1026 09:28:02.222318  509018 cli_runner.go:164] Run: docker run --rm --name newest-cni-596581-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-596581 --entrypoint /usr/bin/test -v newest-cni-596581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:28:02.773958  509018 oci.go:107] Successfully prepared a docker volume newest-cni-596581
	I1026 09:28:02.774004  509018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:02.774024  509018 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:28:02.774125  509018 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-596581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 09:28:07.956660  509018 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-596581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.182488727s)
	I1026 09:28:07.956696  509018 kic.go:203] duration metric: took 5.18266724s to extract preloaded images to volume ...
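	The preload tarball is unpacked into the named volume by a throwaway container running tar (5.18s here). A spot check of the result under stated assumptions (entrypoint overridden as in the sidecar runs above; /usr/bin/ls assumed present in the image; image referenced by tag where the log pins it by digest):

	docker run --rm --entrypoint /usr/bin/ls -v newest-cni-596581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 /var/lib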
	W1026 09:28:07.956839  509018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:28:07.956950  509018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:28:08.022491  509018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-596581 --name newest-cni-596581 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-596581 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-596581 --network newest-cni-596581 --ip 192.168.76.2 --volume newest-cni-596581:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:28:08.359052  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Running}}
	I1026 09:28:08.380707  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:08.404793  509018 cli_runner.go:164] Run: docker exec newest-cni-596581 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:28:08.466772  509018 oci.go:144] the created container "newest-cni-596581" has a running status.
	I1026 09:28:08.466805  509018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa...
	I1026 09:28:08.811623  509018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:28:08.848933  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:08.867583  509018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:28:08.867602  509018 kic_runner.go:114] Args: [docker exec --privileged newest-cni-596581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 09:28:08.934672  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:08.962985  509018 machine.go:93] provisionDockerMachine start ...
	I1026 09:28:08.963091  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:08.988493  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:08.988829  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:08.988838  509018 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:28:08.989397  509018 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37310->127.0.0.1:33460: read: connection reset by peer
	I1026 09:28:12.142530  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:12.142556  509018 ubuntu.go:182] provisioning hostname "newest-cni-596581"
	I1026 09:28:12.142630  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.160146  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:12.160462  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:12.160479  509018 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-596581 && echo "newest-cni-596581" | sudo tee /etc/hostname
	I1026 09:28:12.320083  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:12.320170  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.337550  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:12.337864  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:12.337882  509018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-596581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-596581/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-596581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:28:12.491432  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
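	The script above idempotently maps the node hostname to 127.0.1.1 inside the container. A quick hedged verification through minikube's ssh wrapper (profile name from this run):

	# Confirm the /etc/hosts entry the provisioner just ensured.
	out/minikube-linux-arm64 -p newest-cni-596581 ssh -- grep newest-cni-596581 /etc/hosts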
	I1026 09:28:12.491462  509018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:28:12.491504  509018 ubuntu.go:190] setting up certificates
	I1026 09:28:12.491513  509018 provision.go:84] configureAuth start
	I1026 09:28:12.491574  509018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:12.512652  509018 provision.go:143] copyHostCerts
	I1026 09:28:12.512727  509018 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:28:12.512740  509018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:28:12.512821  509018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:28:12.512926  509018 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:28:12.512938  509018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:28:12.512973  509018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:28:12.513043  509018 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:28:12.513054  509018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:28:12.513085  509018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:28:12.513138  509018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.newest-cni-596581 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-596581]
	I1026 09:28:12.704716  509018 provision.go:177] copyRemoteCerts
	I1026 09:28:12.704779  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:28:12.704820  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.723076  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:12.830756  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:28:12.849498  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:28:12.868018  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 09:28:12.885702  509018 provision.go:87] duration metric: took 394.166332ms to configureAuth
	I1026 09:28:12.885733  509018 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:28:12.885923  509018 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:12.886031  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.902486  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:12.902852  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:12.902876  509018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:28:13.261231  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:28:13.261257  509018 machine.go:96] duration metric: took 4.29825198s to provisionDockerMachine
	I1026 09:28:13.261267  509018 client.go:171] duration metric: took 11.186719226s to LocalClient.Create
	I1026 09:28:13.261329  509018 start.go:167] duration metric: took 11.186842755s to libmachine.API.Create "newest-cni-596581"
	I1026 09:28:13.261340  509018 start.go:293] postStartSetup for "newest-cni-596581" (driver="docker")
	I1026 09:28:13.261371  509018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:28:13.261473  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:28:13.261547  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.279532  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.382786  509018 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:28:13.386264  509018 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:28:13.386295  509018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:28:13.386307  509018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:28:13.386361  509018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:28:13.386448  509018 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:28:13.386564  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:28:13.393998  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:13.416886  509018 start.go:296] duration metric: took 155.531447ms for postStartSetup
	I1026 09:28:13.417263  509018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:13.433603  509018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:13.433905  509018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:28:13.433947  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.454434  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.563905  509018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:28:13.572218  509018 start.go:128] duration metric: took 11.501441942s to createHost
	I1026 09:28:13.572247  509018 start.go:83] releasing machines lock for "newest-cni-596581", held for 11.501616541s
	I1026 09:28:13.572325  509018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:13.590143  509018 ssh_runner.go:195] Run: cat /version.json
	I1026 09:28:13.590157  509018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:28:13.590196  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.590223  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.610565  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.612088  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.813459  509018 ssh_runner.go:195] Run: systemctl --version
	I1026 09:28:13.820261  509018 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:28:13.857324  509018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:28:13.861447  509018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:28:13.861563  509018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:28:13.894289  509018 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:28:13.894356  509018 start.go:495] detecting cgroup driver to use...
	I1026 09:28:13.894414  509018 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:28:13.894504  509018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:28:13.912686  509018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:28:13.925152  509018 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:28:13.925267  509018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:28:13.941469  509018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:28:13.960631  509018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:28:14.101634  509018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:28:14.222840  509018 docker.go:234] disabling docker service ...
	I1026 09:28:14.222916  509018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:28:14.245768  509018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:28:14.259722  509018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:28:14.385640  509018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:28:14.515275  509018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:28:14.529214  509018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:28:14.544661  509018 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:28:14.544755  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.554154  509018 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:28:14.554284  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.563221  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.572088  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.580963  509018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:28:14.589250  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.599794  509018 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.613216  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.622281  509018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:28:14.630015  509018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:28:14.637122  509018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:14.754795  509018 ssh_runner.go:195] Run: sudo systemctl restart crio
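	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to cgroupfs with conmon_cgroup = "pod", and adds net.ipv4.ip_unprivileged_port_start=0 under default_sysctls before crio is restarted. A one-line review of exactly those keys, run inside the node:

	# Show the effective cri-o drop-in after the edits above.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf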
	I1026 09:28:14.894875  509018 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:28:14.894998  509018 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:28:14.899011  509018 start.go:563] Will wait 60s for crictl version
	I1026 09:28:14.899156  509018 ssh_runner.go:195] Run: which crictl
	I1026 09:28:14.902605  509018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:28:14.927266  509018 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:28:14.927430  509018 ssh_runner.go:195] Run: crio --version
	I1026 09:28:14.957131  509018 ssh_runner.go:195] Run: crio --version
	I1026 09:28:15.002153  509018 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:28:15.006700  509018 cli_runner.go:164] Run: docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:15.045471  509018 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:28:15.051564  509018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
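The hosts update above is a replace-then-append idiom: filter out any stale tab-separated mapping for the name, then write a fresh one, so repeated starts never accumulate duplicate entries. A minimal standalone sketch of the same pattern, with the host and IP from this run as placeholders:

    # drop any existing mapping for $HOST, then re-add it atomically via a temp file
    HOST=host.minikube.internal IP=192.168.76.1
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts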
	I1026 09:28:15.066543  509018 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 09:28:15.069286  509018 kubeadm.go:883] updating cluster {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:28:15.069420  509018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:15.069526  509018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:15.108461  509018 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:15.108490  509018 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:28:15.108556  509018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:15.137014  509018 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:15.137041  509018 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:28:15.137050  509018 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:28:15.137145  509018 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-596581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
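The kubelet unit fragment above becomes a systemd drop-in; the scp step a few lines below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. One way to inspect the rendered file on the node, using this run's profile name:

    out/minikube-linux-arm64 -p newest-cni-596581 ssh -- \
      sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf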
	I1026 09:28:15.137242  509018 ssh_runner.go:195] Run: crio config
	I1026 09:28:15.213856  509018 cni.go:84] Creating CNI manager for ""
	I1026 09:28:15.213928  509018 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:15.213962  509018 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 09:28:15.214018  509018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-596581 NodeName:newest-cni-596581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:28:15.214170  509018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-596581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
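Before handing a generated file like the one above to kubeadm init, it can be sanity-checked offline; recent kubeadm releases (this cluster runs v1.34.1) ship a validate subcommand. Run on the node, for example:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml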
	
	I1026 09:28:15.214274  509018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:28:15.222315  509018 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:28:15.222395  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:28:15.231527  509018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:28:15.244778  509018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:28:15.258555  509018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1026 09:28:15.271818  509018 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:28:15.275540  509018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:28:15.285934  509018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:15.413187  509018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:28:15.435192  509018 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581 for IP: 192.168.76.2
	I1026 09:28:15.435217  509018 certs.go:195] generating shared ca certs ...
	I1026 09:28:15.435245  509018 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:15.435427  509018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:28:15.435495  509018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:28:15.435503  509018 certs.go:257] generating profile certs ...
	I1026 09:28:15.435573  509018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key
	I1026 09:28:15.435590  509018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.crt with IP's: []
	I1026 09:28:16.396253  509018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.crt ...
	I1026 09:28:16.396287  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.crt: {Name:mk465c263d6ab4eff71cf55e9387547ab875e0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:16.396578  509018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key ...
	I1026 09:28:16.396603  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key: {Name:mk484ba41ce437341d8cd7d53fc4fe8c6b66c775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:16.396717  509018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff
	I1026 09:28:16.396738  509018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 09:28:17.062992  509018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff ...
	I1026 09:28:17.063024  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff: {Name:mk8e7bcb75abe02451d11a4535afae3d3a3bd8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.063221  509018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff ...
	I1026 09:28:17.063238  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff: {Name:mk5b83bd4b179ab95c8033309350f7a05b40124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.063322  509018 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt
	I1026 09:28:17.063400  509018 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key
	I1026 09:28:17.063460  509018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key
	I1026 09:28:17.063478  509018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt with IP's: []
	I1026 09:28:17.426830  509018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt ...
	I1026 09:28:17.426861  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt: {Name:mk830b958cac374d7ae048dd38d3101fb9f790db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.427055  509018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key ...
	I1026 09:28:17.427070  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key: {Name:mk19586ee06b3298821055295ec68883dd1992bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.427267  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:28:17.427309  509018 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:28:17.427324  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:28:17.427349  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:28:17.427376  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:28:17.427401  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:28:17.427443  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:17.428087  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:28:17.446706  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:28:17.467459  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:28:17.487504  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:28:17.511849  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:28:17.533966  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:28:17.553708  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:28:17.576455  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:28:17.607223  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:28:17.641762  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:28:17.661072  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:28:17.689286  509018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:28:17.704421  509018 ssh_runner.go:195] Run: openssl version
	I1026 09:28:17.711114  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:28:17.721186  509018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:28:17.726875  509018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:28:17.727060  509018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:28:17.800349  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:28:17.824637  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:28:17.838938  509018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:28:17.843053  509018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:28:17.843123  509018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:28:17.892676  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:28:17.902470  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:28:17.911041  509018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:17.914599  509018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:17.914672  509018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:17.956874  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
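The three cert installs above all follow OpenSSL's hashed-directory convention: copy the PEM into place, compute its subject hash, and symlink it as <hash>.0 so TLS libraries can find it by hash. The equivalent manual steps for the first cert:

    sudo ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem)
    sudo ln -fs /etc/ssl/certs/295475.pem "/etc/ssl/certs/${hash}.0"

With this cert the hash comes out as 51391683, matching the 51391683.0 link the runner creates.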
	I1026 09:28:17.966730  509018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:28:17.971513  509018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:28:17.971562  509018 kubeadm.go:400] StartCluster: {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:17.971640  509018 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:28:17.971702  509018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:28:18.006415  509018 cri.go:89] found id: ""
	I1026 09:28:18.006496  509018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:28:18.017518  509018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:28:18.034934  509018 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:28:18.035000  509018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:28:18.048903  509018 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:28:18.048985  509018 kubeadm.go:157] found existing configuration files:
	
	I1026 09:28:18.049078  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:28:18.056885  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:28:18.056951  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:28:18.069856  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:28:18.083216  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:28:18.083277  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:28:18.092581  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:28:18.101677  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:28:18.101749  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:28:18.110396  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:28:18.120037  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:28:18.120104  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 09:28:18.128378  509018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:28:18.176805  509018 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:28:18.177167  509018 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:28:18.220701  509018 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:28:18.221016  509018 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:28:18.221063  509018 kubeadm.go:318] OS: Linux
	I1026 09:28:18.221119  509018 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:28:18.221175  509018 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:28:18.221230  509018 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:28:18.221283  509018 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:28:18.221337  509018 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:28:18.221391  509018 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:28:18.221442  509018 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:28:18.221497  509018 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:28:18.221549  509018 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:28:18.313433  509018 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:28:18.313555  509018 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:28:18.313719  509018 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:28:18.331389  509018 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
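When a check like the system verification failure above needs a closer look, kubeadm can re-run just the preflight stage against the same config; a debugging aid, not part of this run:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml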
	
	
	==> CRI-O <==
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.217519164Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.221606979Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.221638635Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.221658131Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.225576713Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.225628332Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.225652817Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.233436082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.233492837Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.23351806Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.241769746Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.241812429Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.359571493Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=54ecf080-389d-474d-938b-44e1e4a6d642 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.361158599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4fa8c70b-5d3c-401b-898c-91f31b0e30bd name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.36211871Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper" id=cdcb2cab-cb2d-41e9-81cc-285b65c932b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.36223218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.373326594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.375413522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.414376682Z" level=info msg="Created container 2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper" id=cdcb2cab-cb2d-41e9-81cc-285b65c932b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.415509103Z" level=info msg="Starting container: 2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618" id=d94c2837-61f0-4443-9db8-bd42a02d39b0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.421647588Z" level=info msg="Started container" PID=1729 containerID=2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper id=d94c2837-61f0-4443-9db8-bd42a02d39b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039
	Oct 26 09:28:16 no-preload-491604 conmon[1727]: conmon 2fc15d4bd85eea90ae8b <ninfo>: container 1729 exited with status 1
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.6767807Z" level=info msg="Removing container: cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967" id=68fcc658-3f7f-422c-b31b-00499b08adc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.688304625Z" level=info msg="Error loading conmon cgroup of container cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967: cgroup deleted" id=68fcc658-3f7f-422c-b31b-00499b08adc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.691998794Z" level=info msg="Removed container cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper" id=68fcc658-3f7f-422c-b31b-00499b08adc2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2fc15d4bd85ee       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   bebe705b6ff17       dashboard-metrics-scraper-6ffb444bf9-dvz8t   kubernetes-dashboard
	8da19ca23d0c5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   93e128fdd51ac       storage-provisioner                          kube-system
	edf0a0964a713       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   08f09afd10753       kubernetes-dashboard-855c9754f9-7ljxx        kubernetes-dashboard
	f748b54f0c957       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   50c8d5a6aaa71       coredns-66bc5c9577-2rq75                     kube-system
	d51da6dc1c75f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   b01cdc7ca5b48       busybox                                      default
	c36c847447488       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   93e128fdd51ac       storage-provisioner                          kube-system
	1b92235864cad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   5076854c63372       kindnet-4g8pl                                kube-system
	0e1f3ecf7a18e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   3e7cd333d2ccb       kube-proxy-tpv97                             kube-system
	4ccfa38d7bc4a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   f0bcb0e1202ad       kube-apiserver-no-preload-491604             kube-system
	23d945999b91d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   6f005a7174981       kube-controller-manager-no-preload-491604    kube-system
	69cf58b8f57ce       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   24ab742aa9ee6       kube-scheduler-no-preload-491604             kube-system
	4df7dd9514509       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   fef92a9b54a8b       etcd-no-preload-491604                       kube-system
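The table above is CRI container status. Roughly the same view can be pulled from the node with crictl, which talks to CRI-O over the endpoint configured in /etc/crictl.yaml earlier in this log (profile name taken from the pod names):

    out/minikube-linux-arm64 -p no-preload-491604 ssh -- sudo crictl ps -a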
	
	
	==> coredns [f748b54f0c957727dc6734b5f001264bcaaa3216f63bcf86d3463da0c8757dd9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56382 - 9828 "HINFO IN 5968450186337636727.7609667540600921879. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035366673s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-491604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-491604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=no-preload-491604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_26_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-491604
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:28:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-491604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d17013c2-3271-42c0-8ce8-feb077b52c71
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-2rq75                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-491604                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-4g8pl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-491604              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-491604     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-tpv97                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-491604              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dvz8t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7ljxx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 111s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Normal   Starting                 2m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-491604 event: Registered Node no-preload-491604 in Controller
	  Normal   NodeReady                98s                  kubelet          Node no-preload-491604 status is now: NodeReady
	  Normal   Starting                 60s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node no-preload-491604 event: Registered Node no-preload-491604 in Controller
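The node description above is standard kubectl describe output; it can be reproduced against this profile through minikube's kubectl passthrough (a convenience sketch, same profile name as above):

    out/minikube-linux-arm64 -p no-preload-491604 kubectl -- describe node no-preload-491604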
	
	
	==> dmesg <==
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81] <==
	{"level":"warn","ts":"2025-10-26T09:27:25.664220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.683808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.719316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.733388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.757316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.770183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.792237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.814070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.825425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.851691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.880474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.891856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.910604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.926173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.956900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.965810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.987949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.003650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.025480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.040714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.065520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.095827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.111573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.127342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.231196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:28:21 up  3:10,  0 user,  load average: 2.64, 3.40, 2.98
	Linux no-preload-491604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b92235864cad4c9d08a369f04045fb50159db67c04f870fa045d26a1a364397] <==
	I1026 09:27:29.015106       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:27:29.015585       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:27:29.015799       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:27:29.015848       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:27:29.015892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:27:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:27:29.208856       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:27:29.209653       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:27:29.209739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:27:29.299234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:27:59.211455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:27:59.211532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:27:59.300016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:27:59.300193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 09:28:00.810122       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:28:00.810158       1 metrics.go:72] Registering metrics
	I1026 09:28:00.811006       1 controller.go:711] "Syncing nftables rules"
	I1026 09:28:09.209111       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:28:09.209178       1 main.go:301] handling current node
	I1026 09:28:19.209553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:28:19.209594       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e] <==
	I1026 09:27:27.436857       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:27:27.475512       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 09:27:27.475595       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 09:27:27.475785       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 09:27:27.476033       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:27:27.476048       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:27:27.476136       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:27:27.477056       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 09:27:27.477072       1 policy_source.go:240] refreshing policies
	I1026 09:27:27.492142       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:27:27.492302       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 09:27:27.503937       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:27:27.511644       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1026 09:27:27.549412       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:27:27.989087       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:27:28.338945       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:27:28.350016       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:27:28.511072       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:27:28.775353       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:27:28.846455       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:27:29.071657       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.249.107"}
	I1026 09:27:29.091702       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.251.98"}
	I1026 09:27:31.012790       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:27:31.415746       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:27:31.504679       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [23d945999b91d55ecc1428312d8093f362e2eec0dc5f7df30a9d6f75b0350ff5] <==
	I1026 09:27:30.969196       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 09:27:30.969202       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 09:27:30.969435       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:27:30.973626       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:27:30.974828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:27:30.974842       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:27:30.978109       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:27:30.980271       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:27:30.980433       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:27:30.980512       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-491604"
	I1026 09:27:30.980563       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 09:27:30.984882       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 09:27:30.986961       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 09:27:30.998547       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:27:30.998699       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:30.998887       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:27:30.998918       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:27:30.999002       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:27:30.998815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:27:31.000085       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:27:31.000387       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:27:31.008998       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:31.026900       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:27:31.031287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:27:31.033414       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [0e1f3ecf7a18ed6b903c497328088b59492d3347ea87dbcf4e7ac422e8ec654b] <==
	I1026 09:27:28.970575       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:27:29.103028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:27:29.213613       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:27:29.213720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:27:29.213856       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:27:29.232409       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:27:29.232457       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:27:29.235972       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:27:29.236298       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:27:29.236321       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:29.237445       1 config.go:200] "Starting service config controller"
	I1026 09:27:29.237507       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:27:29.237974       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:27:29.238071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:27:29.238168       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:27:29.238200       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:27:29.239033       1 config.go:309] "Starting node config controller"
	I1026 09:27:29.240363       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:27:29.245053       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:27:29.338294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:27:29.338336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:27:29.338564       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b] <==
	I1026 09:27:26.289788       1 serving.go:386] Generated self-signed cert in-memory
	I1026 09:27:28.097076       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:27:28.097142       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:28.130575       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 09:27:28.132056       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:27:28.132713       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 09:27:28.133385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:28.133404       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:28.133422       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:27:28.133454       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:27:28.133646       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:27:28.239851       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 09:27:28.242885       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:28.248227       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: I1026 09:27:31.741536     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8nm\" (UniqueName: \"kubernetes.io/projected/f81533b4-a61b-4898-9998-d631198e8d6b-kube-api-access-ds8nm\") pod \"kubernetes-dashboard-855c9754f9-7ljxx\" (UID: \"f81533b4-a61b-4898-9998-d631198e8d6b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7ljxx"
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: I1026 09:27:31.741558     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzdgm\" (UniqueName: \"kubernetes.io/projected/10b3e082-a6aa-4a89-ab0d-b6279ad647bd-kube-api-access-xzdgm\") pod \"dashboard-metrics-scraper-6ffb444bf9-dvz8t\" (UID: \"10b3e082-a6aa-4a89-ab0d-b6279ad647bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t"
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: W1026 09:27:31.961662     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039 WatchSource:0}: Error finding container bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039: Status 404 returned error can't find the container with id bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: W1026 09:27:31.979210     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-08f09afd107533a6f110acbd193bfce5c2d9b4112736ece779d6923f0b3755fe WatchSource:0}: Error finding container 08f09afd107533a6f110acbd193bfce5c2d9b4112736ece779d6923f0b3755fe: Status 404 returned error can't find the container with id 08f09afd107533a6f110acbd193bfce5c2d9b4112736ece779d6923f0b3755fe
	Oct 26 09:27:37 no-preload-491604 kubelet[773]: I1026 09:27:37.543282     773 scope.go:117] "RemoveContainer" containerID="fee2b8b2cd3fca3d533c5961b879840c535728341a86bee08e88fa21d5a19541"
	Oct 26 09:27:38 no-preload-491604 kubelet[773]: I1026 09:27:38.547989     773 scope.go:117] "RemoveContainer" containerID="fee2b8b2cd3fca3d533c5961b879840c535728341a86bee08e88fa21d5a19541"
	Oct 26 09:27:38 no-preload-491604 kubelet[773]: I1026 09:27:38.548354     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:38 no-preload-491604 kubelet[773]: E1026 09:27:38.548538     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:27:41 no-preload-491604 kubelet[773]: I1026 09:27:41.917542     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:41 no-preload-491604 kubelet[773]: E1026 09:27:41.917726     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.358684     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.591448     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.591740     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: E1026 09:27:53.591906     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.612775     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7ljxx" podStartSLOduration=13.048151517 podStartE2EDuration="22.6127519s" podCreationTimestamp="2025-10-26 09:27:31 +0000 UTC" firstStartedPulling="2025-10-26 09:27:31.983022481 +0000 UTC m=+11.023909842" lastFinishedPulling="2025-10-26 09:27:41.547622864 +0000 UTC m=+20.588510225" observedRunningTime="2025-10-26 09:27:42.575825013 +0000 UTC m=+21.616712391" watchObservedRunningTime="2025-10-26 09:27:53.6127519 +0000 UTC m=+32.653639261"
	Oct 26 09:27:59 no-preload-491604 kubelet[773]: I1026 09:27:59.609214     773 scope.go:117] "RemoveContainer" containerID="c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f"
	Oct 26 09:28:01 no-preload-491604 kubelet[773]: I1026 09:28:01.917615     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:28:01 no-preload-491604 kubelet[773]: E1026 09:28:01.918218     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: I1026 09:28:16.358561     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: I1026 09:28:16.665808     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: I1026 09:28:16.666559     773 scope.go:117] "RemoveContainer" containerID="2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: E1026 09:28:16.667981     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:28:18 no-preload-491604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:28:18 no-preload-491604 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:28:18 no-preload-491604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [edf0a0964a7138fd0ed9bdee8fc158b483d259210ac09da89a80268c5f916cb1] <==
	2025/10/26 09:27:41 Using namespace: kubernetes-dashboard
	2025/10/26 09:27:41 Using in-cluster config to connect to apiserver
	2025/10/26 09:27:41 Using secret token for csrf signing
	2025/10/26 09:27:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:27:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:27:41 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 09:27:41 Generating JWE encryption key
	2025/10/26 09:27:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:27:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:27:42 Initializing JWE encryption key from synchronized object
	2025/10/26 09:27:42 Creating in-cluster Sidecar client
	2025/10/26 09:27:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:42 Serving insecurely on HTTP port: 9090
	2025/10/26 09:28:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:41 Starting overwatch
	
	
	==> storage-provisioner [8da19ca23d0c5adbccbdd3a7e174a027217fe43e5513e87036c1f2214619818a] <==
	I1026 09:27:59.665804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:27:59.684009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:27:59.684064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:27:59.686207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:03.141574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:07.407993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:11.007855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:14.062119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:17.085190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:17.091960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:28:17.092123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:28:17.094624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-491604_523fab44-67b0-45ad-a581-f2dba74cb48c!
	I1026 09:28:17.102591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53d2d1f5-98c5-4ce4-af2a-3a4b0bc16b41", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-491604_523fab44-67b0-45ad-a581-f2dba74cb48c became leader
	W1026 09:28:17.104054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:17.135044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:28:17.195372       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-491604_523fab44-67b0-45ad-a581-f2dba74cb48c!
	W1026 09:28:19.138655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:19.147178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:21.150926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:21.159511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f] <==
	I1026 09:27:28.990982       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:27:58.997611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
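In the logs above, the first storage-provisioner instance dies because it cannot reach the apiserver through the kubernetes Service ClusterIP (10.96.0.1:443) within its 32s budget, while the replacement instance started at 09:27:59 succeeds once kindnet has synced its caches and connectivity returns. A minimal sketch for re-checking ClusterIP reachability on this profile (illustrative commands, not part of the harness):

	# Does the apiserver answer on the Service ClusterIP from inside the node?
	out/minikube-linux-arm64 -p no-preload-491604 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
	# List container states inside the node to see what is still running
	docker exec no-preload-491604 crictl ps -a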
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491604 -n no-preload-491604
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491604 -n no-preload-491604: exit status 2 (499.324971ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
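The `--format` flag takes a Go text/template evaluated against minikube's status struct, so `{{.APIServer}}` prints just that component's state while the process exit code still encodes overall health; that is why the harness tolerates exit status 2 here. A hedged variant that prints the two fields this report queries in a single call (field names taken from the templates used above and below):

	out/minikube-linux-arm64 status -p no-preload-491604 --format='host:{{.Host}} apiserver:{{.APIServer}}'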
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-491604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
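All three proxy variables are empty, which rules out proxy interference in the connection failures above. A minimal equivalent of this snapshot outside the harness (an assumed reconstruction, not the harness's own code):

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo 'no proxy variables set'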
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-491604
helpers_test.go:243: (dbg) docker inspect no-preload-491604:

-- stdout --
	[
	    {
	        "Id": "0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db",
	        "Created": "2025-10-26T09:25:37.402820807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:27:12.361656814Z",
	            "FinishedAt": "2025-10-26T09:27:11.212699105Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/hosts",
	        "LogPath": "/var/lib/docker/containers/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db-json.log",
	        "Name": "/no-preload-491604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-491604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-491604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db",
	                "LowerDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2f4097c3104fc26bf22407de082ee2d20352fd066db72a3f1a8bd15eb695b6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-491604",
	                "Source": "/var/lib/docker/volumes/no-preload-491604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-491604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-491604",
	                "name.minikube.sigs.k8s.io": "no-preload-491604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "268f3da30a5a0b487acb5ae9c2c986c385f7fe32a4df8de085eac12a23adc50e",
	            "SandboxKey": "/var/run/docker/netns/268f3da30a5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-491604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:ff:29:32:91:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b3fac8619483e027c0a41271c69d710c2df0c76a965d01b990e19e9b1b9a2bd",
	                    "EndpointID": "8a6f62027060f7a8b09eddc35ee54bc0540301dd43eff204555f4c32ad90f3a2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-491604",
	                        "0b11d1185923"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
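When only a single field of the inspect output matters, `docker inspect -f` accepts a Go template instead of dumping the whole document; for example, extracting the host port mapped to the node's SSH port (22/tcp, shown as 33455 above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-491604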
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604: exit status 2 (449.01228ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-491604 logs -n 25: (1.608998655s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-167519 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │                     │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:24 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p old-k8s-version-167519                                                                                                                                                                                                                     │ old-k8s-version-167519       │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                                                                                               │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                                                                                                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ image   │ no-preload-491604 image list --format=json                                                                                                                                                                                                    │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ pause   │ -p no-preload-491604 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:28:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:28:01.830762  509018 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:28:01.830899  509018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:01.830912  509018 out.go:374] Setting ErrFile to fd 2...
	I1026 09:28:01.830942  509018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:01.831231  509018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:28:01.831708  509018 out.go:368] Setting JSON to false
	I1026 09:28:01.832749  509018 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11432,"bootTime":1761459450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:28:01.832825  509018 start.go:141] virtualization:  
	I1026 09:28:01.837284  509018 out.go:179] * [newest-cni-596581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:28:01.840830  509018 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:28:01.840894  509018 notify.go:220] Checking for updates...
	I1026 09:28:01.844375  509018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:28:01.847664  509018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:28:01.850980  509018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:28:01.854166  509018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:28:01.857523  509018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:28:01.861253  509018 config.go:182] Loaded profile config "no-preload-491604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:01.861361  509018 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:28:01.883976  509018 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:28:01.884107  509018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:01.960909  509018 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:28:01.950945564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:01.961020  509018 docker.go:318] overlay module found
	I1026 09:28:01.964449  509018 out.go:179] * Using the docker driver based on user configuration
	W1026 09:27:59.197694  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	W1026 09:28:01.203599  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	I1026 09:28:01.967512  509018 start.go:305] selected driver: docker
	I1026 09:28:01.967534  509018 start.go:925] validating driver "docker" against <nil>
	I1026 09:28:01.967550  509018 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:28:01.968572  509018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:02.029005  509018 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:28:02.019386779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:02.029166  509018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1026 09:28:02.029200  509018 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1026 09:28:02.029474  509018 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:28:02.034679  509018 out.go:179] * Using Docker driver with root privileges
	I1026 09:28:02.037583  509018 cni.go:84] Creating CNI manager for ""
	I1026 09:28:02.037664  509018 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:02.037679  509018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
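
The two cni.go lines above capture the selection rule applied to this profile: the docker driver paired with the crio runtime means minikube must supply a CNI itself, and kindnet is the recommended default. A minimal Go sketch of that rule; chooseCNI is a hypothetical reduction, and the real cni.go logic also weighs user flags and other runtimes:

    package main

    import "fmt"

    // chooseCNI is an illustrative reduction of the decision logged above:
    // the docker driver plus a non-docker runtime (crio here) needs an
    // explicit CNI, and kindnet is the recommended default.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet" // matches "recommending kindnet" in the log
        }
        return "" // driver/runtime pair manages pod networking itself
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
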
	I1026 09:28:02.037766  509018 start.go:349] cluster config:
	{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:02.040684  509018 out.go:179] * Starting "newest-cni-596581" primary control-plane node in "newest-cni-596581" cluster
	I1026 09:28:02.043555  509018 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:28:02.046605  509018 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:28:02.049534  509018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:28:02.049777  509018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:02.049822  509018 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:28:02.049835  509018 cache.go:58] Caching tarball of preloaded images
	I1026 09:28:02.049915  509018 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:28:02.049931  509018 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:28:02.050051  509018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:02.050075  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json: {Name:mk2b831f8010d61bca881e6ec71ff69080e491b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:02.070439  509018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:28:02.070463  509018 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:28:02.070482  509018 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:28:02.070506  509018 start.go:360] acquireMachinesLock for newest-cni-596581: {Name:mk457b41350c6ab0aead81b63943ef6522def4bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:28:02.070616  509018 start.go:364] duration metric: took 90.693µs to acquireMachinesLock for "newest-cni-596581"
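
The acquireMachinesLock lines show the lock parameters ({Delay:500ms Timeout:10m0s}) guarding machine creation so parallel test runs cannot provision the same machine twice. A rough sketch of that retry-until-timeout pattern using an exclusive lock file; acquireFileLock is a hypothetical stand-in, and minikube's actual lock implementation differs:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireFileLock retries an exclusive lock-file creation every delay
    // until timeout, mirroring the {Delay:500ms Timeout:10m0s} parameters
    // in the log; minikube's real lock code differs in detail.
    func acquireFileLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release callback
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err // unexpected I/O failure, not contention
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireFileLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held")
    }
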
	I1026 09:28:02.070650  509018 start.go:93] Provisioning new machine with config: &{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:28:02.070760  509018 start.go:125] createHost starting for "" (driver="docker")
	W1026 09:28:03.695922  505287 pod_ready.go:104] pod "coredns-66bc5c9577-2rq75" is not "Ready", error: <nil>
	I1026 09:28:04.196752  505287 pod_ready.go:94] pod "coredns-66bc5c9577-2rq75" is "Ready"
	I1026 09:28:04.196834  505287 pod_ready.go:86] duration metric: took 34.506703893s for pod "coredns-66bc5c9577-2rq75" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.200010  505287 pod_ready.go:83] waiting for pod "etcd-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.205630  505287 pod_ready.go:94] pod "etcd-no-preload-491604" is "Ready"
	I1026 09:28:04.205667  505287 pod_ready.go:86] duration metric: took 5.633297ms for pod "etcd-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.208346  505287 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.213769  505287 pod_ready.go:94] pod "kube-apiserver-no-preload-491604" is "Ready"
	I1026 09:28:04.213851  505287 pod_ready.go:86] duration metric: took 5.475748ms for pod "kube-apiserver-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.216793  505287 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.394195  505287 pod_ready.go:94] pod "kube-controller-manager-no-preload-491604" is "Ready"
	I1026 09:28:04.394272  505287 pod_ready.go:86] duration metric: took 177.408835ms for pod "kube-controller-manager-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.593889  505287 pod_ready.go:83] waiting for pod "kube-proxy-tpv97" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:04.993545  505287 pod_ready.go:94] pod "kube-proxy-tpv97" is "Ready"
	I1026 09:28:04.993578  505287 pod_ready.go:86] duration metric: took 399.611168ms for pod "kube-proxy-tpv97" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:05.194021  505287 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:05.593795  505287 pod_ready.go:94] pod "kube-scheduler-no-preload-491604" is "Ready"
	I1026 09:28:05.593874  505287 pod_ready.go:86] duration metric: took 399.82873ms for pod "kube-scheduler-no-preload-491604" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 09:28:05.593918  505287 pod_ready.go:40] duration metric: took 35.907136054s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 09:28:05.664159  505287 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:28:05.680375  505287 out.go:179] * Done! kubectl is now configured to use "no-preload-491604" cluster and "default" namespace by default
	I1026 09:28:02.074219  509018 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:28:02.074487  509018 start.go:159] libmachine.API.Create for "newest-cni-596581" (driver="docker")
	I1026 09:28:02.074536  509018 client.go:168] LocalClient.Create starting
	I1026 09:28:02.074620  509018 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:28:02.074661  509018 main.go:141] libmachine: Decoding PEM data...
	I1026 09:28:02.074675  509018 main.go:141] libmachine: Parsing certificate...
	I1026 09:28:02.074771  509018 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:28:02.074801  509018 main.go:141] libmachine: Decoding PEM data...
	I1026 09:28:02.074820  509018 main.go:141] libmachine: Parsing certificate...
	I1026 09:28:02.075213  509018 cli_runner.go:164] Run: docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:28:02.091834  509018 cli_runner.go:211] docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:28:02.091937  509018 network_create.go:284] running [docker network inspect newest-cni-596581] to gather additional debugging logs...
	I1026 09:28:02.091961  509018 cli_runner.go:164] Run: docker network inspect newest-cni-596581
	W1026 09:28:02.107915  509018 cli_runner.go:211] docker network inspect newest-cni-596581 returned with exit code 1
	I1026 09:28:02.107947  509018 network_create.go:287] error running [docker network inspect newest-cni-596581]: docker network inspect newest-cni-596581: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-596581 not found
	I1026 09:28:02.107960  509018 network_create.go:289] output of [docker network inspect newest-cni-596581]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-596581 not found
	
	** /stderr **
	I1026 09:28:02.108061  509018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:02.124770  509018 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:28:02.125160  509018 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:28:02.125424  509018 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:28:02.125868  509018 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fb2d0}
	I1026 09:28:02.125894  509018 network_create.go:124] attempt to create docker network newest-cni-596581 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 09:28:02.125951  509018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-596581 newest-cni-596581
	I1026 09:28:02.184527  509018 network_create.go:108] docker network newest-cni-596581 192.168.76.0/24 created
	I1026 09:28:02.184566  509018 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-596581" container
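
The network.go lines above show the subnet scan: candidate /24s are tried in the order 192.168.49.0 → 58 → 67 → 76 (a step of 9, as observed in this log) and the first one not already backing a bridge interface is used, with .2 then reserved as the node's static IP. A small illustrative sketch; the hard-coded taken set stands in for what minikube derives from the host's interfaces and docker networks:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate /24s in steps of 9 starting at
    // 192.168.49.0, returning the first subnet not already in use. The
    // step size and starting point are inferred from this log.
    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third <= 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr
            }
        }
        return "" // exhausted the private range
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // br-256d72a548e0
            "192.168.58.0/24": true, // br-d1cb8c9e02aa
            "192.168.67.0/24": true, // br-8406af390b09
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
    }
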
	I1026 09:28:02.184657  509018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:28:02.203585  509018 cli_runner.go:164] Run: docker volume create newest-cni-596581 --label name.minikube.sigs.k8s.io=newest-cni-596581 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:28:02.222226  509018 oci.go:103] Successfully created a docker volume newest-cni-596581
	I1026 09:28:02.222318  509018 cli_runner.go:164] Run: docker run --rm --name newest-cni-596581-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-596581 --entrypoint /usr/bin/test -v newest-cni-596581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:28:02.773958  509018 oci.go:107] Successfully prepared a docker volume newest-cni-596581
	I1026 09:28:02.774004  509018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:02.774024  509018 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:28:02.774125  509018 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-596581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 09:28:07.956660  509018 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-596581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.182488727s)
	I1026 09:28:07.956696  509018 kic.go:203] duration metric: took 5.18266724s to extract preloaded images to volume ...
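
The extraction step above runs tar inside the kic base image, mounting the lz4 preload read-only at /preloaded.tar and the machine volume at /extractDir, so the node container starts with its images already unpacked. A sketch of assembling that same command with os/exec; the arguments mirror the logged docker run, with paths shortened and the image digest elided:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload reassembles the docker run from the log: tar runs
    // inside the kic base image with the preload mounted read-only and the
    // machine volume mounted at /extractDir.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4",
            "newest-cni-596581",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773")
        if err != nil {
            panic(err)
        }
    }
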
	W1026 09:28:07.956839  509018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:28:07.956950  509018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:28:08.022491  509018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-596581 --name newest-cni-596581 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-596581 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-596581 --network newest-cni-596581 --ip 192.168.76.2 --volume newest-cni-596581:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:28:08.359052  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Running}}
	I1026 09:28:08.380707  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:08.404793  509018 cli_runner.go:164] Run: docker exec newest-cni-596581 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:28:08.466772  509018 oci.go:144] the created container "newest-cni-596581" has a running status.
	I1026 09:28:08.466805  509018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa...
	I1026 09:28:08.811623  509018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:28:08.848933  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:08.867583  509018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:28:08.867602  509018 kic_runner.go:114] Args: [docker exec --privileged newest-cni-596581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 09:28:08.934672  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:08.962985  509018 machine.go:93] provisionDockerMachine start ...
	I1026 09:28:08.963091  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:08.988493  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:08.988829  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:08.988838  509018 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:28:08.989397  509018 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37310->127.0.0.1:33460: read: connection reset by peer
	I1026 09:28:12.142530  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:12.142556  509018 ubuntu.go:182] provisioning hostname "newest-cni-596581"
	I1026 09:28:12.142630  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.160146  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:12.160462  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:12.160479  509018 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-596581 && echo "newest-cni-596581" | sudo tee /etc/hostname
	I1026 09:28:12.320083  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:12.320170  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.337550  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:12.337864  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:12.337882  509018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-596581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-596581/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-596581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:28:12.491432  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:28:12.491462  509018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:28:12.491504  509018 ubuntu.go:190] setting up certificates
	I1026 09:28:12.491513  509018 provision.go:84] configureAuth start
	I1026 09:28:12.491574  509018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:12.512652  509018 provision.go:143] copyHostCerts
	I1026 09:28:12.512727  509018 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:28:12.512740  509018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:28:12.512821  509018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:28:12.512926  509018 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:28:12.512938  509018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:28:12.512973  509018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:28:12.513043  509018 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:28:12.513054  509018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:28:12.513085  509018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:28:12.513138  509018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.newest-cni-596581 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-596581]
	I1026 09:28:12.704716  509018 provision.go:177] copyRemoteCerts
	I1026 09:28:12.704779  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:28:12.704820  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.723076  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:12.830756  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:28:12.849498  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:28:12.868018  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 09:28:12.885702  509018 provision.go:87] duration metric: took 394.166332ms to configureAuth
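
configureAuth generated a server certificate whose SANs come straight from the log's san=[...] list: two IP addresses plus the localhost/minikube/hostname DNS names. A compact sketch of that SAN handling with crypto/x509; this version self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem as the parent:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-596581"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            // SANs as logged: san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-596581]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-596581"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; minikube passes its CA certificate and
        // key as the parent and signing key instead of tmpl and key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
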
	I1026 09:28:12.885733  509018 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:28:12.885923  509018 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:12.886031  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:12.902486  509018 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:12.902852  509018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1026 09:28:12.902876  509018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:28:13.261231  509018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:28:13.261257  509018 machine.go:96] duration metric: took 4.29825198s to provisionDockerMachine
	I1026 09:28:13.261267  509018 client.go:171] duration metric: took 11.186719226s to LocalClient.Create
	I1026 09:28:13.261329  509018 start.go:167] duration metric: took 11.186842755s to libmachine.API.Create "newest-cni-596581"
	I1026 09:28:13.261340  509018 start.go:293] postStartSetup for "newest-cni-596581" (driver="docker")
	I1026 09:28:13.261371  509018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:28:13.261473  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:28:13.261547  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.279532  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.382786  509018 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:28:13.386264  509018 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:28:13.386295  509018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:28:13.386307  509018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:28:13.386361  509018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:28:13.386448  509018 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:28:13.386564  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:28:13.393998  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:13.416886  509018 start.go:296] duration metric: took 155.531447ms for postStartSetup
	I1026 09:28:13.417263  509018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:13.433603  509018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:13.433905  509018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:28:13.433947  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.454434  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.563905  509018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:28:13.572218  509018 start.go:128] duration metric: took 11.501441942s to createHost
	I1026 09:28:13.572247  509018 start.go:83] releasing machines lock for "newest-cni-596581", held for 11.501616541s
	I1026 09:28:13.572325  509018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:13.590143  509018 ssh_runner.go:195] Run: cat /version.json
	I1026 09:28:13.590157  509018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:28:13.590196  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.590223  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:13.610565  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.612088  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:13.813459  509018 ssh_runner.go:195] Run: systemctl --version
	I1026 09:28:13.820261  509018 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:28:13.857324  509018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:28:13.861447  509018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:28:13.861563  509018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:28:13.894289  509018 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:28:13.894356  509018 start.go:495] detecting cgroup driver to use...
	I1026 09:28:13.894414  509018 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:28:13.894504  509018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:28:13.912686  509018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:28:13.925152  509018 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:28:13.925267  509018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:28:13.941469  509018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:28:13.960631  509018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:28:14.101634  509018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:28:14.222840  509018 docker.go:234] disabling docker service ...
	I1026 09:28:14.222916  509018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:28:14.245768  509018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:28:14.259722  509018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:28:14.385640  509018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:28:14.515275  509018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:28:14.529214  509018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:28:14.544661  509018 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:28:14.544755  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.554154  509018 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:28:14.554284  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.563221  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.572088  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.580963  509018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:28:14.589250  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.599794  509018 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.613216  509018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:14.622281  509018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:28:14.630015  509018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:28:14.637122  509018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:14.754795  509018 ssh_runner.go:195] Run: sudo systemctl restart crio
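
The sed invocations above rewrite whole key = value lines in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, the unprivileged-port sysctl) before daemon-reload and crio restart pick them up. The same edit expressed as a small Go helper; setConfigKey is a hypothetical name, and the path below assumes you are inside the node container:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfigKey replaces the whole "key = value" line, like the logged
    // sed 's|^.*pause_image = .*$|...|' edits.
    func setConfigKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Same values as the log; the real file lives inside the node container.
        for k, v := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.10.1",
            "cgroup_manager": "cgroupfs",
        } {
            if err := setConfigKey("/etc/crio/crio.conf.d/02-crio.conf", k, v); err != nil {
                panic(err)
            }
        }
    }
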
	I1026 09:28:14.894875  509018 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:28:14.894998  509018 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:28:14.899011  509018 start.go:563] Will wait 60s for crictl version
	I1026 09:28:14.899156  509018 ssh_runner.go:195] Run: which crictl
	I1026 09:28:14.902605  509018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:28:14.927266  509018 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:28:14.927430  509018 ssh_runner.go:195] Run: crio --version
	I1026 09:28:14.957131  509018 ssh_runner.go:195] Run: crio --version
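
After restarting crio, the log shows two 60-second waits: one stat-polling the socket path, one for crictl version to answer. A sketch of the first wait as a poll loop; the 250ms interval is an assumption, not minikube's value:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stat-polls until the CRI socket exists, like the logged
    // "Will wait 60s for socket path /var/run/crio/crio.sock".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket ready")
    }
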
	I1026 09:28:15.002153  509018 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:28:15.006700  509018 cli_runner.go:164] Run: docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:15.045471  509018 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:28:15.051564  509018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
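
The bash one-liner above is minikube's /etc/hosts upsert: filter out any stale mapping for the name, append the fresh one, write a temp file, then copy it into place so readers never see a half-written file. The same idea in Go; upsertHost is a hypothetical helper and uses rename where the log uses sudo cp:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost drops any line already mapping the name, appends the fresh
    // mapping, and replaces the hosts file whole via a temp file.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
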
	I1026 09:28:15.066543  509018 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 09:28:15.069286  509018 kubeadm.go:883] updating cluster {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:28:15.069420  509018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:15.069526  509018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:15.108461  509018 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:15.108490  509018 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:28:15.108556  509018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:15.137014  509018 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:15.137041  509018 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:28:15.137050  509018 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:28:15.137145  509018 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-596581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:28:15.137242  509018 ssh_runner.go:195] Run: crio config
	I1026 09:28:15.213856  509018 cni.go:84] Creating CNI manager for ""
	I1026 09:28:15.213928  509018 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:15.213962  509018 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 09:28:15.214018  509018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-596581 NodeName:newest-cni-596581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:28:15.214170  509018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-596581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:28:15.214274  509018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:28:15.222315  509018 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:28:15.222395  509018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:28:15.231527  509018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:28:15.244778  509018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:28:15.258555  509018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1026 09:28:15.271818  509018 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:28:15.275540  509018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:28:15.285934  509018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:15.413187  509018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:28:15.435192  509018 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581 for IP: 192.168.76.2
	I1026 09:28:15.435217  509018 certs.go:195] generating shared ca certs ...
	I1026 09:28:15.435245  509018 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:15.435427  509018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:28:15.435495  509018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:28:15.435503  509018 certs.go:257] generating profile certs ...
	I1026 09:28:15.435573  509018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key
	I1026 09:28:15.435590  509018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.crt with IP's: []
	I1026 09:28:16.396253  509018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.crt ...
	I1026 09:28:16.396287  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.crt: {Name:mk465c263d6ab4eff71cf55e9387547ab875e0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:16.396578  509018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key ...
	I1026 09:28:16.396603  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key: {Name:mk484ba41ce437341d8cd7d53fc4fe8c6b66c775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:16.396717  509018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff
	I1026 09:28:16.396738  509018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 09:28:17.062992  509018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff ...
	I1026 09:28:17.063024  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff: {Name:mk8e7bcb75abe02451d11a4535afae3d3a3bd8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.063221  509018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff ...
	I1026 09:28:17.063238  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff: {Name:mk5b83bd4b179ab95c8033309350f7a05b40124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.063322  509018 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt.334b42ff -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt
	I1026 09:28:17.063400  509018 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key
	I1026 09:28:17.063460  509018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key
	I1026 09:28:17.063478  509018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt with IP's: []
	I1026 09:28:17.426830  509018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt ...
	I1026 09:28:17.426861  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt: {Name:mk830b958cac374d7ae048dd38d3101fb9f790db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.427055  509018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key ...
	I1026 09:28:17.427070  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key: {Name:mk19586ee06b3298821055295ec68883dd1992bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:17.427267  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:28:17.427309  509018 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:28:17.427324  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:28:17.427349  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:28:17.427376  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:28:17.427401  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:28:17.427443  509018 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:17.428087  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:28:17.446706  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:28:17.467459  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:28:17.487504  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:28:17.511849  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:28:17.533966  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:28:17.553708  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:28:17.576455  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:28:17.607223  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:28:17.641762  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:28:17.661072  509018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:28:17.689286  509018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:28:17.704421  509018 ssh_runner.go:195] Run: openssl version
	I1026 09:28:17.711114  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:28:17.721186  509018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:28:17.726875  509018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:28:17.727060  509018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:28:17.800349  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:28:17.824637  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:28:17.838938  509018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:28:17.843053  509018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:28:17.843123  509018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:28:17.892676  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:28:17.902470  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:28:17.911041  509018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:17.914599  509018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:17.914672  509018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:17.956874  509018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
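	The openssl x509 -hash calls above compute each CA's subject hash, which then names its /etc/ssl/certs/<hash>.0 symlink; that naming scheme is how OpenSSL locates trusted CAs at verification time. Per this log, minikubeCA.pem hashes to b5213941:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0   # the symlink created by the step above
	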
	I1026 09:28:17.966730  509018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:28:17.971513  509018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:28:17.971562  509018 kubeadm.go:400] StartCluster: {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:17.971640  509018 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:28:17.971702  509018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:28:18.006415  509018 cri.go:89] found id: ""
	I1026 09:28:18.006496  509018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:28:18.017518  509018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:28:18.034934  509018 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:28:18.035000  509018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:28:18.048903  509018 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:28:18.048985  509018 kubeadm.go:157] found existing configuration files:
	
	I1026 09:28:18.049078  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:28:18.056885  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:28:18.056951  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:28:18.069856  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:28:18.083216  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:28:18.083277  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:28:18.092581  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:28:18.101677  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:28:18.101749  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:28:18.110396  509018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:28:18.120037  509018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:28:18.120104  509018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
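	Condensed, the four grep/rm pairs above apply one rule: keep a kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A minimal sketch of the same loop:
	
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done
	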
	I1026 09:28:18.128378  509018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:28:18.176805  509018 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:28:18.177167  509018 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:28:18.220701  509018 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:28:18.221016  509018 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:28:18.221063  509018 kubeadm.go:318] OS: Linux
	I1026 09:28:18.221119  509018 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:28:18.221175  509018 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:28:18.221230  509018 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:28:18.221283  509018 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:28:18.221337  509018 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:28:18.221391  509018 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:28:18.221442  509018 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:28:18.221497  509018 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:28:18.221549  509018 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:28:18.313433  509018 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:28:18.313555  509018 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:28:18.313719  509018 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:28:18.331389  509018 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 09:28:18.337150  509018 out.go:252]   - Generating certificates and keys ...
	I1026 09:28:18.337253  509018 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 09:28:18.337329  509018 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 09:28:18.756222  509018 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 09:28:18.951930  509018 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 09:28:19.393563  509018 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 09:28:20.252645  509018 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 09:28:21.203786  509018 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 09:28:21.204100  509018 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-596581] and IPs [192.168.76.2 127.0.0.1 ::1]
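	Note that kubeadm runs with certificateDir /var/lib/minikube/certs rather than the default /etc/kubernetes/pki, reusing the pre-seeded CA and apiserver certs and generating the rest. Once init finishes, the resulting set can be audited with (a sketch; --cert-dir mirrors the directory shown above):
	
	  sudo kubeadm certs check-expiration --cert-dir /var/lib/minikube/certs
	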
	
	
	==> CRI-O <==
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.217519164Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.221606979Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.221638635Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.221658131Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.225576713Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.225628332Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.225652817Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.233436082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.233492837Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.23351806Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.241769746Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 09:28:09 no-preload-491604 crio[654]: time="2025-10-26T09:28:09.241812429Z" level=info msg="Updated default CNI network name to kindnet"
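	The WRITE/RENAME/CREATE triplets above are kindnet refreshing its CNI config atomically: it writes the full conflist to a .temp file, then renames it into place, so CRI-O's config watcher never observes a half-written file. The same pattern in shell (a sketch; $CONFLIST stands in for the JSON body):
	
	  printf '%s' "$CONFLIST" > /etc/cni/net.d/10-kindnet.conflist.temp
	  mv /etc/cni/net.d/10-kindnet.conflist.temp /etc/cni/net.d/10-kindnet.conflist   # rename(2) is atomic within one filesystem
	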
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.359571493Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=54ecf080-389d-474d-938b-44e1e4a6d642 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.361158599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4fa8c70b-5d3c-401b-898c-91f31b0e30bd name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.36211871Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper" id=cdcb2cab-cb2d-41e9-81cc-285b65c932b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.36223218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.373326594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.375413522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.414376682Z" level=info msg="Created container 2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper" id=cdcb2cab-cb2d-41e9-81cc-285b65c932b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.415509103Z" level=info msg="Starting container: 2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618" id=d94c2837-61f0-4443-9db8-bd42a02d39b0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.421647588Z" level=info msg="Started container" PID=1729 containerID=2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper id=d94c2837-61f0-4443-9db8-bd42a02d39b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039
	Oct 26 09:28:16 no-preload-491604 conmon[1727]: conmon 2fc15d4bd85eea90ae8b <ninfo>: container 1729 exited with status 1
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.6767807Z" level=info msg="Removing container: cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967" id=68fcc658-3f7f-422c-b31b-00499b08adc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.688304625Z" level=info msg="Error loading conmon cgroup of container cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967: cgroup deleted" id=68fcc658-3f7f-422c-b31b-00499b08adc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 09:28:16 no-preload-491604 crio[654]: time="2025-10-26T09:28:16.691998794Z" level=info msg="Removed container cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t/dashboard-metrics-scraper" id=68fcc658-3f7f-422c-b31b-00499b08adc2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2fc15d4bd85ee       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   bebe705b6ff17       dashboard-metrics-scraper-6ffb444bf9-dvz8t   kubernetes-dashboard
	8da19ca23d0c5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   93e128fdd51ac       storage-provisioner                          kube-system
	edf0a0964a713       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   08f09afd10753       kubernetes-dashboard-855c9754f9-7ljxx        kubernetes-dashboard
	f748b54f0c957       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   50c8d5a6aaa71       coredns-66bc5c9577-2rq75                     kube-system
	d51da6dc1c75f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   b01cdc7ca5b48       busybox                                      default
	c36c847447488       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   93e128fdd51ac       storage-provisioner                          kube-system
	1b92235864cad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   5076854c63372       kindnet-4g8pl                                kube-system
	0e1f3ecf7a18e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   3e7cd333d2ccb       kube-proxy-tpv97                             kube-system
	4ccfa38d7bc4a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f0bcb0e1202ad       kube-apiserver-no-preload-491604             kube-system
	23d945999b91d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6f005a7174981       kube-controller-manager-no-preload-491604    kube-system
	69cf58b8f57ce       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   24ab742aa9ee6       kube-scheduler-no-preload-491604             kube-system
	4df7dd9514509       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   fef92a9b54a8b       etcd-no-preload-491604                       kube-system
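	This table is the CRI-level view of the node: components that survived the restart show ATTEMPT 1 or 2, kubernetes-dashboard (created after the restart) is on attempt 0, and only dashboard-metrics-scraper sits in Exited. Roughly the same view can be reproduced on the node with:
	
	  sudo crictl ps -a   # all containers, including exited ones
	  sudo crictl pods    # the corresponding pod sandboxes
	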
	
	
	==> coredns [f748b54f0c957727dc6734b5f001264bcaaa3216f63bcf86d3463da0c8757dd9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56382 - 9828 "HINFO IN 5968450186337636727.7609667540600921879. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035366673s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
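	The i/o timeouts above are CoreDNS failing to reach the kubernetes Service VIP (10.96.0.1:443) in the window right after the restart, before kube-proxy had re-synced its rules; the WARNING about an unsynced API shows CoreDNS chose to start serving before its informers caught up. Two quick checks for this state (a sketch):
	
	  kubectl get endpoints kubernetes      # the VIP must map to the real apiserver address
	  sudo iptables-save | grep 10.96.0.1   # kube-proxy's DNAT rules for the VIP (iptables mode)
	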
	
	
	==> describe nodes <==
	Name:               no-preload-491604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-491604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=no-preload-491604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_26_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-491604
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:28:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 09:28:08 +0000   Sun, 26 Oct 2025 09:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-491604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d17013c2-3271-42c0-8ce8-feb077b52c71
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-2rq75                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-491604                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-4g8pl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-no-preload-491604              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-no-preload-491604     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-tpv97                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-no-preload-491604              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dvz8t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7ljxx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 114s                   kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           118s                   node-controller  Node no-preload-491604 event: Registered Node no-preload-491604 in Controller
	  Normal   NodeReady                101s                   kubelet          Node no-preload-491604 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node no-preload-491604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node no-preload-491604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node no-preload-491604 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node no-preload-491604 event: Registered Node no-preload-491604 in Controller
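	The allocation totals above check out against the pod table: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m/2000m = 42.5%, shown truncated as 42%; memory requests are 70Mi + 100Mi + 50Mi = 220Mi, and limits are 170Mi (coredns) + 50Mi (kindnet) = 220Mi.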
	
	
	==> dmesg <==
	[Oct26 09:04] overlayfs: idmapped layers are currently not supported
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4df7dd95145090d3057188e3620cf6a25f5da49045c8298badfb2b145e77cf81] <==
	{"level":"warn","ts":"2025-10-26T09:27:25.664220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.683808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.719316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.733388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.757316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.770183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.792237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.814070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.825425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.851691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.880474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.891856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.910604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.926173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.956900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.965810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:25.987949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.003650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.025480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.040714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.065520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.095827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.111573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.127342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:27:26.231196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:28:24 up  3:10,  0 user,  load average: 2.64, 3.40, 2.98
	Linux no-preload-491604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b92235864cad4c9d08a369f04045fb50159db67c04f870fa045d26a1a364397] <==
	I1026 09:27:29.015106       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:27:29.015585       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 09:27:29.015799       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:27:29.015848       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:27:29.015892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:27:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:27:29.208856       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:27:29.209653       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:27:29.209739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:27:29.299234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 09:27:59.211455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 09:27:59.211532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 09:27:59.300016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 09:27:59.300193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 09:28:00.810122       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 09:28:00.810158       1 metrics.go:72] Registering metrics
	I1026 09:28:00.811006       1 controller.go:711] "Syncing nftables rules"
	I1026 09:28:09.209111       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:28:09.209178       1 main.go:301] handling current node
	I1026 09:28:19.209553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 09:28:19.209594       1 main.go:301] handling current node
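	kindnet hit the same post-restart window as CoreDNS (the 10.96.0.1:443 timeouts at 09:27:59) and recovered once its caches synced at 09:28:00. The earlier "nri plugin exited" line is non-fatal here: no NRI socket is exposed and the controller keeps running, as the sync and node-handling lines that follow show. To confirm the socket's absence (a sketch):
	
	  sudo test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "no NRI socket"
	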
	
	
	==> kube-apiserver [4ccfa38d7bc4a98e8c1aaf5f20ea2a8b9b48d647982ac7f52522c04c838d695e] <==
	I1026 09:27:27.436857       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:27:27.475512       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 09:27:27.475595       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 09:27:27.475785       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 09:27:27.476033       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 09:27:27.476048       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 09:27:27.476136       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 09:27:27.477056       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 09:27:27.477072       1 policy_source.go:240] refreshing policies
	I1026 09:27:27.492142       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:27:27.492302       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 09:27:27.503937       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:27:27.511644       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1026 09:27:27.549412       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:27:27.989087       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:27:28.338945       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:27:28.350016       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:27:28.511072       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:27:28.775353       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:27:28.846455       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:27:29.071657       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.249.107"}
	I1026 09:27:29.091702       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.251.98"}
	I1026 09:27:31.012790       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:27:31.415746       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:27:31.504679       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [23d945999b91d55ecc1428312d8093f362e2eec0dc5f7df30a9d6f75b0350ff5] <==
	I1026 09:27:30.969196       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 09:27:30.969202       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 09:27:30.969435       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:27:30.973626       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:27:30.974828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:27:30.974842       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:27:30.978109       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:27:30.980271       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 09:27:30.980433       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 09:27:30.980512       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-491604"
	I1026 09:27:30.980563       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 09:27:30.984882       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 09:27:30.986961       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 09:27:30.998547       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 09:27:30.998699       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:30.998887       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:27:30.998918       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:27:30.999002       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:27:30.998815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:27:31.000085       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:27:31.000387       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:27:31.008998       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:27:31.026900       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:27:31.031287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:27:31.033414       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [0e1f3ecf7a18ed6b903c497328088b59492d3347ea87dbcf4e7ac422e8ec654b] <==
	I1026 09:27:28.970575       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:27:29.103028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:27:29.213613       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:27:29.213720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 09:27:29.213856       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:27:29.232409       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:27:29.232457       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:27:29.235972       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:27:29.236298       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:27:29.236321       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:29.237445       1 config.go:200] "Starting service config controller"
	I1026 09:27:29.237507       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:27:29.237974       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:27:29.238071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:27:29.238168       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:27:29.238200       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:27:29.239033       1 config.go:309] "Starting node config controller"
	I1026 09:27:29.240363       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:27:29.245053       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:27:29.338294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:27:29.338336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 09:27:29.338564       1 shared_informer.go:356] "Caches are synced" controller="service config"
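	kube-proxy came up in iptables mode and warned that nodePortAddresses is unset, so NodePort connections are accepted on every local IP. The rendered configuration behind these messages lives in the kube-proxy ConfigMap (a sketch):
	
	  kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep -E 'mode|nodePortAddresses'
	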
	
	
	==> kube-scheduler [69cf58b8f57cebd6e3160b7c720d3edbb72ee084b5649d5326bd03272ea49f4b] <==
	I1026 09:27:26.289788       1 serving.go:386] Generated self-signed cert in-memory
	I1026 09:27:28.097076       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:27:28.097142       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:27:28.130575       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 09:27:28.132056       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:27:28.132713       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 09:27:28.133385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:28.133404       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:28.133422       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:27:28.133454       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 09:27:28.133646       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:27:28.239851       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 09:27:28.242885       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:27:28.248227       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: I1026 09:27:31.741536     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8nm\" (UniqueName: \"kubernetes.io/projected/f81533b4-a61b-4898-9998-d631198e8d6b-kube-api-access-ds8nm\") pod \"kubernetes-dashboard-855c9754f9-7ljxx\" (UID: \"f81533b4-a61b-4898-9998-d631198e8d6b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7ljxx"
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: I1026 09:27:31.741558     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzdgm\" (UniqueName: \"kubernetes.io/projected/10b3e082-a6aa-4a89-ab0d-b6279ad647bd-kube-api-access-xzdgm\") pod \"dashboard-metrics-scraper-6ffb444bf9-dvz8t\" (UID: \"10b3e082-a6aa-4a89-ab0d-b6279ad647bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t"
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: W1026 09:27:31.961662     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039 WatchSource:0}: Error finding container bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039: Status 404 returned error can't find the container with id bebe705b6ff1792277a2af10dbb82ed6c3b9c7cb0136658a603cca2582301039
	Oct 26 09:27:31 no-preload-491604 kubelet[773]: W1026 09:27:31.979210     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b11d118592378c19e5198891ca8140a611839fcb32162486f5b254dced053db/crio-08f09afd107533a6f110acbd193bfce5c2d9b4112736ece779d6923f0b3755fe WatchSource:0}: Error finding container 08f09afd107533a6f110acbd193bfce5c2d9b4112736ece779d6923f0b3755fe: Status 404 returned error can't find the container with id 08f09afd107533a6f110acbd193bfce5c2d9b4112736ece779d6923f0b3755fe
	Oct 26 09:27:37 no-preload-491604 kubelet[773]: I1026 09:27:37.543282     773 scope.go:117] "RemoveContainer" containerID="fee2b8b2cd3fca3d533c5961b879840c535728341a86bee08e88fa21d5a19541"
	Oct 26 09:27:38 no-preload-491604 kubelet[773]: I1026 09:27:38.547989     773 scope.go:117] "RemoveContainer" containerID="fee2b8b2cd3fca3d533c5961b879840c535728341a86bee08e88fa21d5a19541"
	Oct 26 09:27:38 no-preload-491604 kubelet[773]: I1026 09:27:38.548354     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:38 no-preload-491604 kubelet[773]: E1026 09:27:38.548538     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:27:41 no-preload-491604 kubelet[773]: I1026 09:27:41.917542     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:41 no-preload-491604 kubelet[773]: E1026 09:27:41.917726     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.358684     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.591448     773 scope.go:117] "RemoveContainer" containerID="25d6600994aef098f4b22998d5885d993c57d76e25d6211eda997292bf0d1873"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.591740     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: E1026 09:27:53.591906     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:27:53 no-preload-491604 kubelet[773]: I1026 09:27:53.612775     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7ljxx" podStartSLOduration=13.048151517 podStartE2EDuration="22.6127519s" podCreationTimestamp="2025-10-26 09:27:31 +0000 UTC" firstStartedPulling="2025-10-26 09:27:31.983022481 +0000 UTC m=+11.023909842" lastFinishedPulling="2025-10-26 09:27:41.547622864 +0000 UTC m=+20.588510225" observedRunningTime="2025-10-26 09:27:42.575825013 +0000 UTC m=+21.616712391" watchObservedRunningTime="2025-10-26 09:27:53.6127519 +0000 UTC m=+32.653639261"
	Oct 26 09:27:59 no-preload-491604 kubelet[773]: I1026 09:27:59.609214     773 scope.go:117] "RemoveContainer" containerID="c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f"
	Oct 26 09:28:01 no-preload-491604 kubelet[773]: I1026 09:28:01.917615     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:28:01 no-preload-491604 kubelet[773]: E1026 09:28:01.918218     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: I1026 09:28:16.358561     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: I1026 09:28:16.665808     773 scope.go:117] "RemoveContainer" containerID="cc2520321cdb073c010aa38327e8bf9d3ab334ec63cc4a48fcc773f033cec967"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: I1026 09:28:16.666559     773 scope.go:117] "RemoveContainer" containerID="2fc15d4bd85eea90ae8b1546d0cd2d9c458e3ee301b9a2e99f26069f8096c618"
	Oct 26 09:28:16 no-preload-491604 kubelet[773]: E1026 09:28:16.667981     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dvz8t_kubernetes-dashboard(10b3e082-a6aa-4a89-ab0d-b6279ad647bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dvz8t" podUID="10b3e082-a6aa-4a89-ab0d-b6279ad647bd"
	Oct 26 09:28:18 no-preload-491604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:28:18 no-preload-491604 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:28:18 no-preload-491604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [edf0a0964a7138fd0ed9bdee8fc158b483d259210ac09da89a80268c5f916cb1] <==
	2025/10/26 09:27:41 Using namespace: kubernetes-dashboard
	2025/10/26 09:27:41 Using in-cluster config to connect to apiserver
	2025/10/26 09:27:41 Using secret token for csrf signing
	2025/10/26 09:27:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 09:27:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 09:27:41 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 09:27:41 Generating JWE encryption key
	2025/10/26 09:27:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 09:27:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 09:27:42 Initializing JWE encryption key from synchronized object
	2025/10/26 09:27:42 Creating in-cluster Sidecar client
	2025/10/26 09:27:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:42 Serving insecurely on HTTP port: 9090
	2025/10/26 09:28:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 09:27:41 Starting overwatch
	
	
	==> storage-provisioner [8da19ca23d0c5adbccbdd3a7e174a027217fe43e5513e87036c1f2214619818a] <==
	I1026 09:27:59.665804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 09:27:59.684009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 09:27:59.684064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 09:27:59.686207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:03.141574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:07.407993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:11.007855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:14.062119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:17.085190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:17.091960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:28:17.092123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 09:28:17.094624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-491604_523fab44-67b0-45ad-a581-f2dba74cb48c!
	I1026 09:28:17.102591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53d2d1f5-98c5-4ce4-af2a-3a4b0bc16b41", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-491604_523fab44-67b0-45ad-a581-f2dba74cb48c became leader
	W1026 09:28:17.104054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:17.135044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 09:28:17.195372       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-491604_523fab44-67b0-45ad-a581-f2dba74cb48c!
	W1026 09:28:19.138655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:19.147178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:21.150926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:21.159511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:23.162099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 09:28:23.166989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c36c84744748869b599fb44b99c426fc1f10fb2c928dea1d4738240b0c03006f] <==
	I1026 09:27:28.990982       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 09:27:58.997611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
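The fatal line in the second storage-provisioner container above (c36c847…) is a 32s dial timeout to 10.96.0.1:443, the in-cluster apiserver service VIP; the replacement container (8da19ca2…) then came up and acquired the lease once the apiserver was reachable again. A minimal Go sketch of that connectivity probe, assuming it is run from inside a pod on the cluster; the address and timeout are taken from the log line, while the file and function names are ours:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServerVIP dials the in-cluster apiserver service VIP the way the
// failed provisioner's client-go request ultimately had to: a plain TCP
// connect to 10.96.0.1:443 within the 32s budget quoted in the fatal log.
func probeAPIServerVIP() error {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
	if err != nil {
		// Mirrors the logged failure: "dial tcp 10.96.0.1:443: i/o timeout"
		return fmt.Errorf("apiserver VIP unreachable: %w", err)
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServerVIP(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver VIP reachable from this pod")
}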
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491604 -n no-preload-491604
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491604 -n no-preload-491604: exit status 2 (465.477432ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-491604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.68s)
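For reference, the post-mortem status probe above can be replayed outside the harness. A hedged Go sketch follows; the binary path, profile name, and format string are quoted from the helpers_test.go:262 invocation, and, like the harness, it treats exit status 2 as possibly benign, since the APIServer field can still print "Running" on a paused host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Replays: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491604 -n no-preload-491604
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "no-preload-491604", "-n", "no-preload-491604")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // prints "Running" in the run above
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		fmt.Println("status error: exit status 2 (may be ok)") // same tolerance as helpers_test.go:262
	} else if err != nil {
		fmt.Println("unexpected status failure:", err)
	}
}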

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-596581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-596581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (398.08659ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:28:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-596581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
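The MK_ADDON_ENABLE_PAUSED exits in this run share one signature: the paused-state check shells out to sudo runc list -f json on the node, and runc exits 1 because its default state directory /run/runc does not exist under the crio runtime. A hedged sketch of reproducing that probe by hand; the runc command and error text are quoted verbatim from the stderr above, while reaching the node through minikube ssh is our assumption, not necessarily how minikube wires the check internally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same paused-state probe on the node over minikube ssh.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "newest-cni-596581",
		"--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On this node the probe dies with:
		//   level=error msg="open /run/runc: no such file or directory"
		// which minikube surfaces as MK_ADDON_ENABLE_PAUSED.
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("paused containers (JSON): %s", out)
}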
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-596581
helpers_test.go:243: (dbg) docker inspect newest-cni-596581:

-- stdout --
	[
	    {
	        "Id": "d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81",
	        "Created": "2025-10-26T09:28:08.038286143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509482,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:28:08.128372302Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/hostname",
	        "HostsPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/hosts",
	        "LogPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81-json.log",
	        "Name": "/newest-cni-596581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-596581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-596581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81",
	                "LowerDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-596581",
	                "Source": "/var/lib/docker/volumes/newest-cni-596581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-596581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-596581",
	                "name.minikube.sigs.k8s.io": "newest-cni-596581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "052ace79adf7784b57ba669c8de06d1cc3a19dbf8586707acd25a80242e3caff",
	            "SandboxKey": "/var/run/docker/netns/052ace79adf7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-596581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:7b:49:82:64:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3579436470ca4b2c8964527b6b8432c0aea2af9e0a0728e90452b5864afaf1c5",
	                    "EndpointID": "4c0787ac1bed3ed6b437948b1febc523ee049a4ce52f4e9ebf3b89555f5ed900",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-596581",
	                        "d784789bf46e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-596581 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-596581 logs -n 25: (1.416853402s)
E1026 09:28:49.264569  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-289159 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ pause   │ -p default-k8s-diff-port-289159 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p default-k8s-diff-port-289159                                                                                                                                                                                                               │ default-k8s-diff-port-289159 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ delete  │ -p disable-driver-mounts-434228                                                                                                                                                                                                               │ disable-driver-mounts-434228 │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:25 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                                                                                                   │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381           │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ no-preload-491604 image list --format=json                                                                                                                                                                                                    │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ pause   │ -p no-preload-491604 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ delete  │ -p no-preload-491604                                                                                                                                                                                                                          │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p no-preload-491604                                                                                                                                                                                                                          │ no-preload-491604            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p auto-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-796399                  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-596581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-596581            │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:28:28
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:28:28.688100  512470 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:28:28.688367  512470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:28.688402  512470 out.go:374] Setting ErrFile to fd 2...
	I1026 09:28:28.688425  512470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:28.688714  512470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:28:28.689183  512470 out.go:368] Setting JSON to false
	I1026 09:28:28.690210  512470 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11459,"bootTime":1761459450,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:28:28.690315  512470 start.go:141] virtualization:  
	I1026 09:28:28.694128  512470 out.go:179] * [auto-796399] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:28:28.698635  512470 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:28:28.698765  512470 notify.go:220] Checking for updates...
	I1026 09:28:28.705442  512470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:28:28.708676  512470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:28:28.711797  512470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:28:28.714951  512470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:28:28.717973  512470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:28:28.721601  512470 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:28.721739  512470 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:28:28.752902  512470 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:28:28.753061  512470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:28.812700  512470 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:28:28.803458306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:28.812813  512470 docker.go:318] overlay module found
	I1026 09:28:28.816006  512470 out.go:179] * Using the docker driver based on user configuration
	I1026 09:28:28.818997  512470 start.go:305] selected driver: docker
	I1026 09:28:28.819018  512470 start.go:925] validating driver "docker" against <nil>
	I1026 09:28:28.819033  512470 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:28:28.819817  512470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:28.876873  512470 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:28:28.867812552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:28.877034  512470 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 09:28:28.877282  512470 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 09:28:28.880192  512470 out.go:179] * Using Docker driver with root privileges
	I1026 09:28:28.883031  512470 cni.go:84] Creating CNI manager for ""
	I1026 09:28:28.883103  512470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:28.883117  512470 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 09:28:28.883197  512470 start.go:349] cluster config:
	{Name:auto-796399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-796399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:28.886385  512470 out.go:179] * Starting "auto-796399" primary control-plane node in "auto-796399" cluster
	I1026 09:28:28.889175  512470 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:28:28.892044  512470 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:28:28.894897  512470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:28.894951  512470 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:28:28.894963  512470 cache.go:58] Caching tarball of preloaded images
	I1026 09:28:28.894992  512470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:28:28.895051  512470 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:28:28.895070  512470 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:28:28.895184  512470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/config.json ...
	I1026 09:28:28.895209  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/config.json: {Name:mk3d2476964e3734c1780ef828d41bd2de7f001e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:28.916193  512470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:28:28.916216  512470 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:28:28.916230  512470 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:28:28.916253  512470 start.go:360] acquireMachinesLock for auto-796399: {Name:mk233dfecbde58beb43e1374bf2e5d802a6cb78f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:28:28.916376  512470 start.go:364] duration metric: took 107.686µs to acquireMachinesLock for "auto-796399"
	I1026 09:28:28.916410  512470 start.go:93] Provisioning new machine with config: &{Name:auto-796399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-796399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:28:28.916486  512470 start.go:125] createHost starting for "" (driver="docker")
	I1026 09:28:28.669284  509018 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001729358s
	I1026 09:28:28.673836  509018 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 09:28:28.674085  509018 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 09:28:28.674637  509018 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 09:28:28.675116  509018 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 09:28:28.920442  512470 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 09:28:28.920785  512470 start.go:159] libmachine.API.Create for "auto-796399" (driver="docker")
	I1026 09:28:28.920864  512470 client.go:168] LocalClient.Create starting
	I1026 09:28:28.921004  512470 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem
	I1026 09:28:28.921043  512470 main.go:141] libmachine: Decoding PEM data...
	I1026 09:28:28.921092  512470 main.go:141] libmachine: Parsing certificate...
	I1026 09:28:28.921183  512470 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem
	I1026 09:28:28.921213  512470 main.go:141] libmachine: Decoding PEM data...
	I1026 09:28:28.921256  512470 main.go:141] libmachine: Parsing certificate...
	I1026 09:28:28.922158  512470 cli_runner.go:164] Run: docker network inspect auto-796399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 09:28:28.948635  512470 cli_runner.go:211] docker network inspect auto-796399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 09:28:28.948723  512470 network_create.go:284] running [docker network inspect auto-796399] to gather additional debugging logs...
	I1026 09:28:28.948739  512470 cli_runner.go:164] Run: docker network inspect auto-796399
	W1026 09:28:28.967247  512470 cli_runner.go:211] docker network inspect auto-796399 returned with exit code 1
	I1026 09:28:28.967276  512470 network_create.go:287] error running [docker network inspect auto-796399]: docker network inspect auto-796399: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-796399 not found
	I1026 09:28:28.967289  512470 network_create.go:289] output of [docker network inspect auto-796399]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-796399 not found
	
	** /stderr **
	I1026 09:28:28.967393  512470 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:28.985700  512470 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
	I1026 09:28:28.986055  512470 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1cb8c9e02aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:57:21:82:79:73} reservation:<nil>}
	I1026 09:28:28.986316  512470 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8406af390b09 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:1a:81:bc:01:0d} reservation:<nil>}
	I1026 09:28:28.986946  512470 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3579436470ca IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:b0:23:d6:2d:19} reservation:<nil>}
	I1026 09:28:28.987425  512470 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a88100}
	I1026 09:28:28.987444  512470 network_create.go:124] attempt to create docker network auto-796399 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 09:28:28.987496  512470 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-796399 auto-796399
	I1026 09:28:29.062474  512470 network_create.go:108] docker network auto-796399 192.168.85.0/24 created
	I1026 09:28:29.062503  512470 kic.go:121] calculated static IP "192.168.85.2" for the "auto-796399" container
	I1026 09:28:29.062589  512470 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 09:28:29.079549  512470 cli_runner.go:164] Run: docker volume create auto-796399 --label name.minikube.sigs.k8s.io=auto-796399 --label created_by.minikube.sigs.k8s.io=true
	I1026 09:28:29.104532  512470 oci.go:103] Successfully created a docker volume auto-796399
	I1026 09:28:29.104613  512470 cli_runner.go:164] Run: docker run --rm --name auto-796399-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-796399 --entrypoint /usr/bin/test -v auto-796399:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 09:28:29.794249  512470 oci.go:107] Successfully prepared a docker volume auto-796399
	I1026 09:28:29.794295  512470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:29.794315  512470 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 09:28:29.794377  512470 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-796399:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 09:28:36.788817  509018 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 8.113482107s
	I1026 09:28:34.582411  512470 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-796399:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.787997427s)
	I1026 09:28:34.582444  512470 kic.go:203] duration metric: took 4.7881257s to extract preloaded images to volume ...
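Note: the preload above avoids pulling images at cluster start: a throwaway kicbase container runs tar to unpack the cached lz4 tarball straight into the named volume that will back the node's /var. A bash equivalent (cache path shortened to $HOME/.minikube, image digest omitted):

	# Unpack the preloaded image tarball into the volume backing the node's /var.
	VOLUME=auto-796399
	TARBALL=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -I lz4 -xf /preloaded.tar -C /extractDir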
	W1026 09:28:34.582593  512470 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 09:28:34.582692  512470 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 09:28:34.672374  512470 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-796399 --name auto-796399 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-796399 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-796399 --network auto-796399 --ip 192.168.85.2 --volume auto-796399:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 09:28:35.064458  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Running}}
	I1026 09:28:35.097057  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:28:35.137761  512470 cli_runner.go:164] Run: docker exec auto-796399 stat /var/lib/dpkg/alternatives/iptables
	I1026 09:28:35.200558  512470 oci.go:144] the created container "auto-796399" has a running status.
	I1026 09:28:35.200585  512470 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa...
	I1026 09:28:35.660029  512470 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 09:28:35.682157  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:28:35.718911  512470 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 09:28:35.718931  512470 kic_runner.go:114] Args: [docker exec --privileged auto-796399 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 09:28:35.775808  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:28:35.795350  512470 machine.go:93] provisionDockerMachine start ...
	I1026 09:28:35.795437  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:35.818172  512470 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:35.818506  512470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1026 09:28:35.818517  512470 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:28:35.819452  512470 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55474->127.0.0.1:33465: read: connection reset by peer
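Note: because the node is a container, SSH targets whatever host port Docker published for the container's 22/tcp, and the first dial can hit a connection reset while sshd is still starting (the retry succeeds a few lines further down). A hand-rolled equivalent of that round trip (retry count is an assumption):

	# Resolve the host port mapped to 22/tcp and connect, retrying while sshd boots.
	NODE=auto-796399
	KEY=$HOME/.minikube/machines/$NODE/id_rsa
	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NODE")
	for i in 1 2 3 4 5; do
	  ssh -o StrictHostKeyChecking=no -i "$KEY" -p "$PORT" \
	      docker@127.0.0.1 hostname && break
	  sleep 1
	done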
	I1026 09:28:36.968033  509018 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.292444977s
	I1026 09:28:38.676836  509018 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.002303769s
	I1026 09:28:38.696936  509018 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:28:38.713568  509018 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:28:38.729851  509018 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:28:38.730069  509018 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-596581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:28:38.743657  509018 kubeadm.go:318] [bootstrap-token] Using token: 0yzzd3.nbttq11cidsqx1mc
	I1026 09:28:38.746597  509018 out.go:252]   - Configuring RBAC rules ...
	I1026 09:28:38.746769  509018 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:28:38.751311  509018 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:28:38.760497  509018 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:28:38.764496  509018 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:28:38.770994  509018 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:28:38.774951  509018 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:28:39.086599  509018 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:28:39.512505  509018 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:28:40.088253  509018 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:28:40.090017  509018 kubeadm.go:318] 
	I1026 09:28:40.090099  509018 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:28:40.090111  509018 kubeadm.go:318] 
	I1026 09:28:40.090192  509018 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:28:40.090197  509018 kubeadm.go:318] 
	I1026 09:28:40.090224  509018 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:28:40.091012  509018 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:28:40.091086  509018 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:28:40.091093  509018 kubeadm.go:318] 
	I1026 09:28:40.091150  509018 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:28:40.091156  509018 kubeadm.go:318] 
	I1026 09:28:40.091206  509018 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:28:40.091210  509018 kubeadm.go:318] 
	I1026 09:28:40.091265  509018 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:28:40.091344  509018 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:28:40.091415  509018 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:28:40.091425  509018 kubeadm.go:318] 
	I1026 09:28:40.091799  509018 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:28:40.091890  509018 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:28:40.091898  509018 kubeadm.go:318] 
	I1026 09:28:40.092235  509018 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 0yzzd3.nbttq11cidsqx1mc \
	I1026 09:28:40.092353  509018 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:28:40.093129  509018 kubeadm.go:318] 	--control-plane 
	I1026 09:28:40.093142  509018 kubeadm.go:318] 
	I1026 09:28:40.093480  509018 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:28:40.093493  509018 kubeadm.go:318] 
	I1026 09:28:40.093825  509018 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 0yzzd3.nbttq11cidsqx1mc \
	I1026 09:28:40.094170  509018 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:28:40.100892  509018 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:28:40.101146  509018 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:28:40.101262  509018 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 09:28:40.101438  509018 cni.go:84] Creating CNI manager for ""
	I1026 09:28:40.101450  509018 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:40.104839  509018 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:28:40.107891  509018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:28:40.119588  509018 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:28:40.119682  509018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:28:40.150254  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 09:28:40.648294  509018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:28:40.648421  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:40.648504  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-596581 minikube.k8s.io/updated_at=2025_10_26T09_28_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=newest-cni-596581 minikube.k8s.io/primary=true
	I1026 09:28:40.860435  509018 ops.go:34] apiserver oom_adj: -16
	I1026 09:28:40.860540  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:41.360847  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:38.970980  512470 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-796399
	
	I1026 09:28:38.971004  512470 ubuntu.go:182] provisioning hostname "auto-796399"
	I1026 09:28:38.971073  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:39.001708  512470 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:39.002040  512470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1026 09:28:39.002052  512470 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-796399 && echo "auto-796399" | sudo tee /etc/hostname
	I1026 09:28:39.188970  512470 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-796399
	
	I1026 09:28:39.189099  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:39.215290  512470 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:39.215614  512470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1026 09:28:39.215636  512470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-796399' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-796399/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-796399' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:28:39.374105  512470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:28:39.374143  512470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:28:39.374178  512470 ubuntu.go:190] setting up certificates
	I1026 09:28:39.374195  512470 provision.go:84] configureAuth start
	I1026 09:28:39.374290  512470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-796399
	I1026 09:28:39.404801  512470 provision.go:143] copyHostCerts
	I1026 09:28:39.404871  512470 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:28:39.404890  512470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:28:39.404970  512470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:28:39.405061  512470 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:28:39.405066  512470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:28:39.405091  512470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:28:39.405139  512470 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:28:39.405144  512470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:28:39.405165  512470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:28:39.405223  512470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.auto-796399 san=[127.0.0.1 192.168.85.2 auto-796399 localhost minikube]
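Note: configureAuth issues a server certificate whose SANs cover every name the machine may be reached by (loopback, static IP, hostname, localhost, minikube). minikube does this in Go, but roughly the same certificate can be produced with openssl against the CA files named above; a sketch:

	# Issue a server cert signed by the minikube CA with the SANs from the log.
	CERTS=$HOME/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.auto-796399/CN=minikube"
	openssl x509 -req -in server.csr -days 365 \
	  -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:auto-796399,DNS:localhost,DNS:minikube') \
	  -out server.pem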
	I1026 09:28:39.832348  512470 provision.go:177] copyRemoteCerts
	I1026 09:28:39.832443  512470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:28:39.832505  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:39.851756  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:28:39.959295  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:28:39.981039  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1026 09:28:40.019351  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 09:28:40.049057  512470 provision.go:87] duration metric: took 674.836119ms to configureAuth
	I1026 09:28:40.049093  512470 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:28:40.049376  512470 config.go:182] Loaded profile config "auto-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:40.049542  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:40.071249  512470 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:40.071628  512470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1026 09:28:40.071653  512470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:28:40.396149  512470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:28:40.396174  512470 machine.go:96] duration metric: took 4.600805341s to provisionDockerMachine
	I1026 09:28:40.396183  512470 client.go:171] duration metric: took 11.475309325s to LocalClient.Create
	I1026 09:28:40.396217  512470 start.go:167] duration metric: took 11.475433774s to libmachine.API.Create "auto-796399"
	I1026 09:28:40.396229  512470 start.go:293] postStartSetup for "auto-796399" (driver="docker")
	I1026 09:28:40.396239  512470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:28:40.396320  512470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:28:40.396381  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:40.420018  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:28:40.543591  512470 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:28:40.547576  512470 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:28:40.547603  512470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:28:40.547614  512470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:28:40.547674  512470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:28:40.547759  512470 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:28:40.547865  512470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:28:40.555919  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:40.576550  512470 start.go:296] duration metric: took 180.306634ms for postStartSetup
	I1026 09:28:40.577016  512470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-796399
	I1026 09:28:40.598483  512470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/config.json ...
	I1026 09:28:40.598918  512470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:28:40.598967  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:40.631119  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:28:40.740309  512470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:28:40.746643  512470 start.go:128] duration metric: took 11.830140832s to createHost
	I1026 09:28:40.746668  512470 start.go:83] releasing machines lock for "auto-796399", held for 11.830284473s
	I1026 09:28:40.746753  512470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-796399
	I1026 09:28:40.776330  512470 ssh_runner.go:195] Run: cat /version.json
	I1026 09:28:40.776388  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:40.776963  512470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:28:40.777045  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:28:40.807160  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:28:40.812517  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:28:40.934817  512470 ssh_runner.go:195] Run: systemctl --version
	I1026 09:28:41.064370  512470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:28:41.111462  512470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:28:41.116307  512470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:28:41.116376  512470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:28:41.150914  512470 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 09:28:41.150941  512470 start.go:495] detecting cgroup driver to use...
	I1026 09:28:41.150985  512470 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:28:41.151040  512470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:28:41.171509  512470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:28:41.184500  512470 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:28:41.184570  512470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:28:41.203457  512470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:28:41.225620  512470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:28:41.359117  512470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:28:41.539859  512470 docker.go:234] disabling docker service ...
	I1026 09:28:41.539944  512470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:28:41.572542  512470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:28:41.588425  512470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:28:41.711138  512470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:28:41.842594  512470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:28:41.856170  512470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:28:41.883166  512470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:28:41.883248  512470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.899872  512470 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:28:41.900022  512470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.910574  512470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.921829  512470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.933515  512470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:28:41.943479  512470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.953464  512470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.973115  512470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:41.984790  512470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:28:41.995148  512470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:28:42.007753  512470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:42.144683  512470 ssh_runner.go:195] Run: sudo systemctl restart crio
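Note: the sed series above patches /etc/crio/crio.conf.d/02-crio.conf key by key: the pause image, cgroupfs as the cgroup manager, conmon's cgroup, and an unprivileged-port sysctl. The end state of just those keys, written as one drop-in (a sketch; section names assume the stock CRI-O config layout, and replacing the file wholesale would discard its other settings):

	# End state of the keys the sed edits above touch, then reload + restart.
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio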
	I1026 09:28:42.304593  512470 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:28:42.304717  512470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:28:42.312746  512470 start.go:563] Will wait 60s for crictl version
	I1026 09:28:42.312861  512470 ssh_runner.go:195] Run: which crictl
	I1026 09:28:42.317328  512470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:28:42.345192  512470 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:28:42.345335  512470 ssh_runner.go:195] Run: crio --version
	I1026 09:28:42.386533  512470 ssh_runner.go:195] Run: crio --version
	I1026 09:28:42.438290  512470 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:28:42.441245  512470 cli_runner.go:164] Run: docker network inspect auto-796399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:42.466199  512470 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 09:28:42.470451  512470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:28:42.481413  512470 kubeadm.go:883] updating cluster {Name:auto-796399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-796399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:28:42.481532  512470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:42.481588  512470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:42.515902  512470 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:42.515925  512470 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:28:42.515980  512470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:42.541469  512470 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:42.541494  512470 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:28:42.541510  512470 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 09:28:42.541669  512470 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-796399 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-796399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
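Note: the empty ExecStart= in the unit text above is deliberate: in a systemd drop-in, a blank ExecStart= clears the command inherited from the base kubelet.service so that the minikube-specific one that follows is the only one that runs. Written out as the drop-in the log scp's into place (a sketch):

	# 10-kubeadm.conf drop-in: blank ExecStart= resets the inherited command.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet \
	  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
	  --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml \
	  --enforce-node-allocatable= --hostname-override=auto-796399 \
	  --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	EOF
	sudo systemctl daemon-reload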
	I1026 09:28:42.541788  512470 ssh_runner.go:195] Run: crio config
	I1026 09:28:42.622021  512470 cni.go:84] Creating CNI manager for ""
	I1026 09:28:42.622047  512470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:42.622064  512470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 09:28:42.622117  512470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-796399 NodeName:auto-796399 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:28:42.622308  512470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-796399"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:28:42.622425  512470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:28:42.631387  512470 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:28:42.631490  512470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:28:42.640059  512470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1026 09:28:42.653915  512470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:28:42.669742  512470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
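Note: the kubeadm config rendered above is written to kubeadm.yaml.new here and only copied to kubeadm.yaml right before init. A config like this can be sanity-checked with kubeadm itself; a sketch (the config validate subcommand exists in kubeadm v1.26+, so the v1.34.1 binary here should carry it):

	# Validate the rendered kubeadm config against the target version's API types.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new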
	I1026 09:28:42.683015  512470 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:28:42.687330  512470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:28:42.697202  512470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:42.826437  512470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:28:42.842901  512470 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399 for IP: 192.168.85.2
	I1026 09:28:42.842925  512470 certs.go:195] generating shared ca certs ...
	I1026 09:28:42.842942  512470 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:42.843078  512470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:28:42.843126  512470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:28:42.843139  512470 certs.go:257] generating profile certs ...
	I1026 09:28:42.843193  512470 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.key
	I1026 09:28:42.843220  512470 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.crt with IP's: []
	I1026 09:28:43.169629  512470 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.crt ...
	I1026 09:28:43.169664  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.crt: {Name:mk15269c0d6a9039e25470911385640897b3b8d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:43.169873  512470 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.key ...
	I1026 09:28:43.169886  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.key: {Name:mk259870ce7037d030854228a8e984c7dc33ab8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:43.169987  512470 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.key.1f59dc02
	I1026 09:28:43.170007  512470 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.crt.1f59dc02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 09:28:41.861253  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:42.361598  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:42.861619  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:43.360603  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:43.861234  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:44.360651  509018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:28:44.529683  509018 kubeadm.go:1113] duration metric: took 3.881308899s to wait for elevateKubeSystemPrivileges
	I1026 09:28:44.529712  509018 kubeadm.go:402] duration metric: took 26.558153469s to StartCluster
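Note: elevateKubeSystemPrivileges is the polling visible above: the default ServiceAccount only appears once the controller manager's token controller runs, and minikube pairs the wait with the minikube-rbac cluster-admin binding created earlier in the log. A bash sketch of that loop:

	# Wait for the default ServiceAccount, then grant kube-system:default cluster-admin.
	KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	until $KUBECTL get sa default >/dev/null 2>&1; do sleep 0.5; done
	$KUBECTL create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default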
	I1026 09:28:44.529729  509018 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:44.529796  509018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:28:44.530422  509018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:44.530645  509018 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:28:44.530765  509018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:28:44.531012  509018 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:44.531058  509018 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:28:44.531120  509018 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-596581"
	I1026 09:28:44.531141  509018 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-596581"
	I1026 09:28:44.531162  509018 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:28:44.531677  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:44.532151  509018 addons.go:69] Setting default-storageclass=true in profile "newest-cni-596581"
	I1026 09:28:44.532171  509018 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-596581"
	I1026 09:28:44.532458  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:44.534119  509018 out.go:179] * Verifying Kubernetes components...
	I1026 09:28:44.537240  509018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:44.579534  509018 addons.go:238] Setting addon default-storageclass=true in "newest-cni-596581"
	I1026 09:28:44.579572  509018 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:28:44.579982  509018 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:44.596066  509018 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:28:43.711821  512470 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.crt.1f59dc02 ...
	I1026 09:28:43.711981  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.crt.1f59dc02: {Name:mk9aebde3c96e25d86868c197aa930c775fdd2b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:43.713100  512470 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.key.1f59dc02 ...
	I1026 09:28:43.713224  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.key.1f59dc02: {Name:mk9c757d5672d4e29c580f24eb9824bb40e2c91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:43.713406  512470 certs.go:382] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.crt.1f59dc02 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.crt
	I1026 09:28:43.713561  512470 certs.go:386] copying /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.key.1f59dc02 -> /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.key
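Note: the apiserver certificate assembled above carries the service VIP (10.96.0.1), loopback, an internal IP, and the node IP as SANs; the set baked into the final .crt can be confirmed with openssl (profile path shortened to $HOME/.minikube):

	# Inspect the SANs in the generated apiserver certificate.
	openssl x509 -noout -text \
	  -in "$HOME/.minikube/profiles/auto-796399/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'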
	I1026 09:28:43.713652  512470 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.key
	I1026 09:28:43.713685  512470 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.crt with IP's: []
	I1026 09:28:44.429140  512470 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.crt ...
	I1026 09:28:44.429171  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.crt: {Name:mk7e5cd923c6286c8f4860ff6e8b1e39cf6d5b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:44.429438  512470 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.key ...
	I1026 09:28:44.429457  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.key: {Name:mke20266593d4c0c821f0bfcd1644af02c9c2393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:28:44.429694  512470 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:28:44.429755  512470 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:28:44.429772  512470 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:28:44.429812  512470 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:28:44.429867  512470 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:28:44.429898  512470 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:28:44.429961  512470 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:44.430533  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:28:44.452688  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:28:44.477964  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:28:44.499909  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:28:44.611837  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1026 09:28:44.676557  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:28:44.706473  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:28:44.737219  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 09:28:44.771774  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:28:44.805555  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:28:44.836485  512470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:28:44.858580  512470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:28:44.873908  512470 ssh_runner.go:195] Run: openssl version
	I1026 09:28:44.886138  512470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:28:44.897244  512470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:44.907408  512470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:44.907476  512470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:28:44.960004  512470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:28:44.969454  512470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:28:44.980166  512470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:28:44.986056  512470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:28:44.986124  512470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:28:45.043697  512470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:28:45.057926  512470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:28:45.071914  512470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:28:45.076948  512470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:28:45.077070  512470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:28:45.145064  512470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
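Note: the openssl x509 -hash runs and ln -fs calls above implement OpenSSL's CA directory convention: each trusted PEM under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, which is where b5213941.0, 51391683.0 and 3ec20f2e.0 come from. Generically:

	# Trust a CA the way the log does: link it under its subject hash.
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"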
	I1026 09:28:45.158554  512470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:28:45.164640  512470 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 09:28:45.164732  512470 kubeadm.go:400] StartCluster: {Name:auto-796399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-796399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:45.164830  512470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:28:45.164957  512470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:28:45.222869  512470 cri.go:89] found id: ""
	I1026 09:28:45.223004  512470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:28:45.240283  512470 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 09:28:45.269363  512470 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 09:28:45.269492  512470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 09:28:45.295915  512470 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 09:28:45.295951  512470 kubeadm.go:157] found existing configuration files:
	
	I1026 09:28:45.296036  512470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 09:28:45.327986  512470 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 09:28:45.328117  512470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 09:28:45.364991  512470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 09:28:45.391373  512470 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 09:28:45.391485  512470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 09:28:45.408959  512470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 09:28:45.428505  512470 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 09:28:45.428604  512470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 09:28:45.441866  512470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 09:28:45.455635  512470 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 09:28:45.455707  512470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
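The four grep/rm pairs above are one stale-config check applied to each kubeconfig in turn: if the file does not point at the expected control-plane endpoint (or does not exist), it is removed before kubeadm init. A condensed sketch of the pattern, with paths and URL taken from the log:

	for f in admin kubelet controller-manager scheduler; do
	  grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" ||
	    sudo rm -f "/etc/kubernetes/${f}.conf"
	done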
	I1026 09:28:45.467392  512470 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 09:28:45.537965  512470 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 09:28:45.540815  512470 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 09:28:45.588023  512470 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 09:28:45.588129  512470 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 09:28:45.588199  512470 kubeadm.go:318] OS: Linux
	I1026 09:28:45.588280  512470 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 09:28:45.588371  512470 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 09:28:45.588446  512470 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 09:28:45.588520  512470 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 09:28:45.588593  512470 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 09:28:45.588665  512470 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 09:28:45.588739  512470 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 09:28:45.588812  512470 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 09:28:45.588888  512470 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 09:28:45.718851  512470 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 09:28:45.719010  512470 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 09:28:45.719146  512470 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 09:28:45.735083  512470 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
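As the preflight output notes, the control-plane images can be fetched ahead of time. A sketch using the same kubeadm binary path as the init command above (pre-pulling is optional; kubeadm will pull on demand otherwise):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --kubernetes-version v1.34.1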
	I1026 09:28:44.599702  509018 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:28:44.599728  509018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:28:44.599789  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:44.609023  509018 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:28:44.609043  509018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:28:44.609103  509018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:44.645762  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:44.670809  509018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:45.049079  509018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:28:45.049223  509018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:28:45.124542  509018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:28:45.230415  509018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:28:46.302050  509018 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.252803495s)
	I1026 09:28:46.303045  509018 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:28:46.303151  509018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:28:46.303258  509018 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.25414126s)
	I1026 09:28:46.303294  509018 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 09:28:46.654137  509018 api_server.go:72] duration metric: took 2.12345889s to wait for apiserver process to appear ...
	I1026 09:28:46.654206  509018 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:28:46.654247  509018 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 09:28:46.654083  509018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.423553213s)
	I1026 09:28:46.655222  509018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.530651651s)
	I1026 09:28:46.676315  509018 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:28:46.678120  509018 api_server.go:141] control plane version: v1.34.1
	I1026 09:28:46.678190  509018 api_server.go:131] duration metric: took 23.963096ms to wait for apiserver health ...
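The healthz wait above can be reproduced by hand against the endpoint from the log; -k skips verification of the cluster's self-signed CA, and the check assumes anonymous access to /healthz is enabled (the minikube default):

	curl -k https://192.168.76.2:8443/healthz   # prints "ok" once the apiserver is healthy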
	I1026 09:28:46.678214  509018 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:28:46.685660  509018 system_pods.go:59] 8 kube-system pods found
	I1026 09:28:46.685689  509018 system_pods.go:61] "coredns-66bc5c9577-ls7nq" [62473023-ba0e-4958-991d-1a2cde76799e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 09:28:46.685698  509018 system_pods.go:61] "etcd-newest-cni-596581" [264f14d5-6146-4dcb-9f23-d72280bb5ea2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:28:46.685705  509018 system_pods.go:61] "kindnet-2j87q" [8ac4e2f0-aa4a-4f25-9328-aefbce3cde40] Running
	I1026 09:28:46.685713  509018 system_pods.go:61] "kube-apiserver-newest-cni-596581" [cbc6496d-a07d-4174-a276-5e1829b8b8b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:28:46.685719  509018 system_pods.go:61] "kube-controller-manager-newest-cni-596581" [23925cb0-7d94-4d90-8550-de65406a9bc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:28:46.685726  509018 system_pods.go:61] "kube-proxy-72xqz" [bbd599f1-02d6-4a30-b5d6-a2d81d11c10e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 09:28:46.685730  509018 system_pods.go:61] "kube-scheduler-newest-cni-596581" [644365d1-94e6-4b78-84e9-fae9ef2bfb9e] Running
	I1026 09:28:46.685737  509018 system_pods.go:61] "storage-provisioner" [f949f69f-a15f-4d9d-b1b7-5f29bed135bf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 09:28:46.685755  509018 system_pods.go:74] duration metric: took 7.51168ms to wait for pod list to return data ...
	I1026 09:28:46.685765  509018 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:28:46.688029  509018 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 09:28:46.690824  509018 default_sa.go:45] found service account: "default"
	I1026 09:28:46.690843  509018 default_sa.go:55] duration metric: took 5.073109ms for default service account to be created ...
	I1026 09:28:46.690856  509018 kubeadm.go:586] duration metric: took 2.160178034s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:28:46.690872  509018 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:28:46.691326  509018 addons.go:514] duration metric: took 2.160244825s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 09:28:46.694382  509018 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:28:46.694406  509018 node_conditions.go:123] node cpu capacity is 2
	I1026 09:28:46.694418  509018 node_conditions.go:105] duration metric: took 3.540805ms to run NodePressure ...
	I1026 09:28:46.694430  509018 start.go:241] waiting for startup goroutines ...
	I1026 09:28:46.807778  509018 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-596581" context rescaled to 1 replicas
	I1026 09:28:46.807808  509018 start.go:246] waiting for cluster config update ...
	I1026 09:28:46.807819  509018 start.go:255] writing updated cluster config ...
	I1026 09:28:46.808103  509018 ssh_runner.go:195] Run: rm -f paused
	I1026 09:28:46.896345  509018 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:28:46.899803  509018 out.go:179] * Done! kubectl is now configured to use "newest-cni-596581" cluster and "default" namespace by default
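The closing note reports a one-minor-version skew between the client (1.33.2) and the cluster (1.34.1), which is within kubectl's supported range of one minor version. A quick sanity check against the freshly configured context:

	kubectl config current-context   # expect: newest-cni-596581
	kubectl version                  # shows the client/server skew noted above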
	
	
	==> CRI-O <==
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.59047802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.601601283Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4f01bc57-1a5d-46cb-af92-18cb20f6491c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.614594389Z" level=info msg="Ran pod sandbox 53ed7ab0d2a0fb190c905cc623dd40459bb8647b5687bea5ede4a66ca91a7475 with infra container: kube-system/kindnet-2j87q/POD" id=4f01bc57-1a5d-46cb-af92-18cb20f6491c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.631759094Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=778017b4-5008-4a51-a882-5b3bf54d71c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.633222358Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2e97365f-61b4-48fb-b367-51985876e0f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.661916478Z" level=info msg="Creating container: kube-system/kindnet-2j87q/kindnet-cni" id=d627d7a7-699e-4426-9d4c-a1a4190f2ddb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.662735508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.680147339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.681059047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.701964265Z" level=info msg="Created container d734484eec82271ddc197ca7e84d2fee1b8ecb95d0dfab8383d3fffd35216b0b: kube-system/kindnet-2j87q/kindnet-cni" id=d627d7a7-699e-4426-9d4c-a1a4190f2ddb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.705437828Z" level=info msg="Starting container: d734484eec82271ddc197ca7e84d2fee1b8ecb95d0dfab8383d3fffd35216b0b" id=75cfdef4-665d-4ff2-a55e-991b6099576a name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.713672143Z" level=info msg="Started container" PID=1487 containerID=d734484eec82271ddc197ca7e84d2fee1b8ecb95d0dfab8383d3fffd35216b0b description=kube-system/kindnet-2j87q/kindnet-cni id=75cfdef4-665d-4ff2-a55e-991b6099576a name=/runtime.v1.RuntimeService/StartContainer sandboxID=53ed7ab0d2a0fb190c905cc623dd40459bb8647b5687bea5ede4a66ca91a7475
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.832233476Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-72xqz/POD" id=cbf018ae-53dd-4259-a2c0-6c16296b8736 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.832296673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.843347821Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cbf018ae-53dd-4259-a2c0-6c16296b8736 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.871809831Z" level=info msg="Ran pod sandbox 369da4ff86c9c53e68adea5804d868c794ecc0516e9480af6e1d26b55e3f4091 with infra container: kube-system/kube-proxy-72xqz/POD" id=cbf018ae-53dd-4259-a2c0-6c16296b8736 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.8914942Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=71f32705-026a-45f2-9002-3991299f41f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.892964357Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6ccae006-eb0f-42c8-8960-8e0625377ea9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.914088408Z" level=info msg="Creating container: kube-system/kube-proxy-72xqz/kube-proxy" id=9a5fa798-9096-41b4-8084-0991224e637e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.914194707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.943210784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.951128664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.995215609Z" level=info msg="Created container 9933700ddfbb5b32f1843efd269f1e2ec1e38f2181a6ed8d47d48529a1a34713: kube-system/kube-proxy-72xqz/kube-proxy" id=9a5fa798-9096-41b4-8084-0991224e637e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:28:45 newest-cni-596581 crio[836]: time="2025-10-26T09:28:45.997829018Z" level=info msg="Starting container: 9933700ddfbb5b32f1843efd269f1e2ec1e38f2181a6ed8d47d48529a1a34713" id=441ad9ec-2969-4fa5-b5b4-0428924d0f24 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:28:46 newest-cni-596581 crio[836]: time="2025-10-26T09:28:46.027206732Z" level=info msg="Started container" PID=1516 containerID=9933700ddfbb5b32f1843efd269f1e2ec1e38f2181a6ed8d47d48529a1a34713 description=kube-system/kube-proxy-72xqz/kube-proxy id=441ad9ec-2969-4fa5-b5b4-0428924d0f24 name=/runtime.v1.RuntimeService/StartContainer sandboxID=369da4ff86c9c53e68adea5804d868c794ecc0516e9480af6e1d26b55e3f4091
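The CRI-O entries above can be cross-checked from inside the node with crictl, assuming it is pointed at the CRI-O socket; the container ID below is the kindnet-cni container created in this log:

	sudo crictl ps   # running containers: kube-proxy, kindnet-cni, control-plane pods
	sudo crictl inspect d734484eec82271ddc197ca7e84d2fee1b8ecb95d0dfab8383d3fffd35216b0b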
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9933700ddfbb5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   369da4ff86c9c       kube-proxy-72xqz                            kube-system
	d734484eec822       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   53ed7ab0d2a0f       kindnet-2j87q                               kube-system
	3664fa6312155       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago      Running             kube-controller-manager   0                   5d22f5135b977       kube-controller-manager-newest-cni-596581   kube-system
	bc42592e14914       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago      Running             etcd                      0                   284dd5651d551       etcd-newest-cni-596581                      kube-system
	787cf81f4a807       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago      Running             kube-scheduler            0                   bf4954a9998fa       kube-scheduler-newest-cni-596581            kube-system
	f658525d509fd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago      Running             kube-apiserver            0                   ac321497137ad       kube-apiserver-newest-cni-596581            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-596581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-596581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=newest-cni-596581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_28_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:28:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-596581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:28:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:28:39 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:28:39 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:28:39 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 09:28:39 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-596581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                93ba9819-a70b-44ae-b5e4-6adc0588dffe
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-596581                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-2j87q                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-596581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-596581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-72xqz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-596581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x8 over 20s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-596581 event: Registered Node newest-cni-596581 in Controller
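The Ready=False condition and the node.kubernetes.io/not-ready taint in this section both trace back to the missing CNI configuration; once kindnet writes its config file the node flips to Ready and the taint is lifted. Two checks that mirror what the report shows:

	kubectl get node newest-cni-596581 -o jsonpath='{.spec.taints}'   # not-ready taint while CNI is absent
	ls /etc/cni/net.d/                                                # populated by kindnet once it starts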
	
	
	==> dmesg <==
	[ +24.516567] overlayfs: idmapped layers are currently not supported
	[ +10.940525] overlayfs: idmapped layers are currently not supported
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	[Oct26 09:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bc42592e149149096f24d7c438415a8ca1818e9243bf75b810f2a55e9a859b8f] <==
	{"level":"warn","ts":"2025-10-26T09:28:33.799700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:33.827091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:33.873804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:33.903292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:33.936610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:33.964330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.023509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.030067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.077817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.103583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.133960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.170112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.200668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.238049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.267877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.281431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.299161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.316629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.359223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.373132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.398227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.427859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.453672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.474507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:28:34.723207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:28:48 up  3:11,  0 user,  load average: 3.98, 3.65, 3.08
	Linux newest-cni-596581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d734484eec82271ddc197ca7e84d2fee1b8ecb95d0dfab8383d3fffd35216b0b] <==
	I1026 09:28:45.812880       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:28:45.813098       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:28:45.813215       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:28:45.813225       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:28:45.813238       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:28:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:28:46.104387       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:28:46.104404       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:28:46.104412       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:28:46.104675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
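The final kindnet line is a non-fatal notice: the NRI plugin could not attach because the runtime is not exposing an NRI socket. Whether the socket exists can be checked directly, using the path from the log:

	test -S /var/run/nri/nri.sock && echo "NRI enabled" || echo "NRI socket absent"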
	
	
	==> kube-apiserver [f658525d509fd239a998820aed5181d06c794941fac927507850923f3c59d43f] <==
	I1026 09:28:37.046932       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1026 09:28:37.061481       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:28:37.063156       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:28:37.063307       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 09:28:37.067734       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 09:28:37.073355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:28:37.074276       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:28:37.075430       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 09:28:37.626855       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 09:28:37.633171       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 09:28:37.633255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:28:38.378072       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:28:38.439550       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:28:38.545316       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 09:28:38.553095       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 09:28:38.554315       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:28:38.559892       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:28:38.966678       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:28:39.494013       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:28:39.511347       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 09:28:39.546255       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 09:28:44.753583       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:28:44.787487       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 09:28:44.956287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 09:28:45.008342       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3664fa631215526ff7171e73fc81d9b796bb7aeab8484ae27bdf6c10c8f7f312] <==
	I1026 09:28:44.130282       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:28:44.132583       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 09:28:44.151088       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:28:44.163918       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 09:28:44.164084       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 09:28:44.164197       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 09:28:44.164235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:28:44.166811       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 09:28:44.166903       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 09:28:44.170980       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:28:44.172502       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:28:44.182463       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 09:28:44.182545       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 09:28:44.182604       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 09:28:44.182639       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 09:28:44.182648       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 09:28:44.182654       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 09:28:44.183428       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:28:44.183481       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:28:44.194425       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-596581" podCIDRs=["10.42.0.0/24"]
	I1026 09:28:44.200945       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 09:28:44.225531       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:28:44.261313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:28:44.261415       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:28:44.261448       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [9933700ddfbb5b32f1843efd269f1e2ec1e38f2181a6ed8d47d48529a1a34713] <==
	I1026 09:28:46.177064       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:28:46.365649       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:28:46.565883       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:28:46.565924       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 09:28:46.566021       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:28:46.706337       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:28:46.706453       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:28:46.715759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:28:46.716153       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:28:46.716212       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:28:46.718853       1 config.go:200] "Starting service config controller"
	I1026 09:28:46.718918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:28:46.722863       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:28:46.722958       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:28:46.723007       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:28:46.723045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:28:46.723751       1 config.go:309] "Starting node config controller"
	I1026 09:28:46.726038       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:28:46.726114       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:28:46.821509       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:28:46.824418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:28:46.824456       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
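The kube-proxy warning above is advisory: with nodePortAddresses unset, NodePort traffic is accepted on every local IP. The log's own suggestion corresponds to the flag below (a sketch only; it was not applied in this run):

	kube-proxy --nodeport-addresses=primary   # restrict NodePorts to the node's primary IPs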
	
	
	==> kube-scheduler [787cf81f4a807c6ee6a34afa2f8d357174eccdf4f7a55ebadfe846dd22bc29f4] <==
	E1026 09:28:36.966478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:28:36.966681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 09:28:36.966751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 09:28:36.966793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 09:28:36.966832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 09:28:36.966882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:28:36.966925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 09:28:36.966963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:28:36.967008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 09:28:36.967046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 09:28:36.967087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:28:36.967130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 09:28:36.967178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:28:36.967218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 09:28:36.967262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 09:28:36.982991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 09:28:37.893480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 09:28:37.896985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 09:28:37.975029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 09:28:37.985156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 09:28:38.022900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 09:28:38.059470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 09:28:38.067067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 09:28:38.091314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1026 09:28:39.934274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
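The burst of "Failed to watch" errors above is the usual startup race: the scheduler begins listing resources before the apiserver finishes bootstrapping the system:kube-scheduler RBAC bindings, and the final "Caches are synced" line shows it recovered. The permission can be verified after startup via impersonation:

	kubectl auth can-i list nodes --as=system:kube-scheduler   # expect: yes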
	
	
	==> kubelet <==
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032824    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/093df9f5a64eb7e1ffebb7ec79c05ce3-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-596581\" (UID: \"093df9f5a64eb7e1ffebb7ec79c05ce3\") " pod="kube-system/kube-apiserver-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032852    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f648dfa3e94427d1d2a08beb378397e9-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-596581\" (UID: \"f648dfa3e94427d1d2a08beb378397e9\") " pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032881    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f648dfa3e94427d1d2a08beb378397e9-kubeconfig\") pod \"kube-controller-manager-newest-cni-596581\" (UID: \"f648dfa3e94427d1d2a08beb378397e9\") " pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032920    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f648dfa3e94427d1d2a08beb378397e9-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-596581\" (UID: \"f648dfa3e94427d1d2a08beb378397e9\") " pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032942    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f648dfa3e94427d1d2a08beb378397e9-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-596581\" (UID: \"f648dfa3e94427d1d2a08beb378397e9\") " pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032969    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/093df9f5a64eb7e1ffebb7ec79c05ce3-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-596581\" (UID: \"093df9f5a64eb7e1ffebb7ec79c05ce3\") " pod="kube-system/kube-apiserver-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.032994    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f648dfa3e94427d1d2a08beb378397e9-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-596581\" (UID: \"f648dfa3e94427d1d2a08beb378397e9\") " pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.033015    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/6a6dca07f229ebbb72447b061f342896-etcd-data\") pod \"etcd-newest-cni-596581\" (UID: \"6a6dca07f229ebbb72447b061f342896\") " pod="kube-system/etcd-newest-cni-596581"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.306098    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-596581" podStartSLOduration=1.306080784 podStartE2EDuration="1.306080784s" podCreationTimestamp="2025-10-26 09:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:28:40.267927127 +0000 UTC m=+0.884540866" watchObservedRunningTime="2025-10-26 09:28:40.306080784 +0000 UTC m=+0.922694523"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.306252    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-596581" podStartSLOduration=1.306244052 podStartE2EDuration="1.306244052s" podCreationTimestamp="2025-10-26 09:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:28:40.30584591 +0000 UTC m=+0.922459649" watchObservedRunningTime="2025-10-26 09:28:40.306244052 +0000 UTC m=+0.922857783"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.326813    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-596581" podStartSLOduration=1.326788002 podStartE2EDuration="1.326788002s" podCreationTimestamp="2025-10-26 09:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:28:40.326298912 +0000 UTC m=+0.942912651" watchObservedRunningTime="2025-10-26 09:28:40.326788002 +0000 UTC m=+0.943401741"
	Oct 26 09:28:40 newest-cni-596581 kubelet[1310]: I1026 09:28:40.378179    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-596581" podStartSLOduration=1.378159801 podStartE2EDuration="1.378159801s" podCreationTimestamp="2025-10-26 09:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:28:40.351568257 +0000 UTC m=+0.968181996" watchObservedRunningTime="2025-10-26 09:28:40.378159801 +0000 UTC m=+0.994773540"
	Oct 26 09:28:44 newest-cni-596581 kubelet[1310]: I1026 09:28:44.233067    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 09:28:44 newest-cni-596581 kubelet[1310]: I1026 09:28:44.233857    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401593    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-kube-proxy\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401715    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkpqx\" (UniqueName: \"kubernetes.io/projected/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-kube-api-access-hkpqx\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401741    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-cni-cfg\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401761    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxtvt\" (UniqueName: \"kubernetes.io/projected/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-kube-api-access-rxtvt\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401797    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-lib-modules\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401827    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-xtables-lock\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401869    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-lib-modules\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.401891    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-xtables-lock\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:28:45 newest-cni-596581 kubelet[1310]: I1026 09:28:45.565921    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 09:28:46 newest-cni-596581 kubelet[1310]: I1026 09:28:46.243089    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2j87q" podStartSLOduration=1.243068861 podStartE2EDuration="1.243068861s" podCreationTimestamp="2025-10-26 09:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:28:45.969806357 +0000 UTC m=+6.586420088" watchObservedRunningTime="2025-10-26 09:28:46.243068861 +0000 UTC m=+6.859682592"
	Oct 26 09:28:47 newest-cni-596581 kubelet[1310]: I1026 09:28:47.045442    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72xqz" podStartSLOduration=2.045424604 podStartE2EDuration="2.045424604s" podCreationTimestamp="2025-10-26 09:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 09:28:47.045164351 +0000 UTC m=+7.661778090" watchObservedRunningTime="2025-10-26 09:28:47.045424604 +0000 UTC m=+7.662038335"
	

                                                
                                                
-- /stdout --
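The kubelet entries above record the node's Pod CIDR being updated to 10.42.0.0/24, which looks like the first /24 allocated out of the 10.42.0.0/16 range this profile passes via --extra-config=kubeadm.pod-network-cidr (see the Audit table further down). A minimal spot-check, assuming the kubeconfig context from this run is still reachable (not part of the harness):

	# read back the CIDR actually recorded on the node object
	kubectl --context newest-cni-596581 get node newest-cni-596581 -o jsonpath='{.spec.podCIDR}'
	# expected: 10.42.0.0/24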
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-596581 -n newest-cni-596581
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-596581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ls7nq storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner: exit status 1 (100.650034ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ls7nq" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.07s)
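The describe step above failed with NotFound because it reused pod names captured a moment earlier; by the time kubectl ran, coredns-66bc5c9577-ls7nq and storage-provisioner had already been replaced or removed. A sketch of a more robust re-query, selecting coredns by its stable label instead of a stale name (a hypothetical follow-up, not something the harness runs):

	# re-list whatever is currently non-running, then describe coredns by label
	kubectl --context newest-cni-596581 get pods -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-596581 describe pods -n kube-system -l k8s-app=kube-dns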

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-596581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-596581 --alsologtostderr -v=1: exit status 80 (2.509700131s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-596581 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 09:29:12.803246  517769 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:29:12.803357  517769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:29:12.803367  517769 out.go:374] Setting ErrFile to fd 2...
	I1026 09:29:12.803372  517769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:29:12.803624  517769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:29:12.803896  517769 out.go:368] Setting JSON to false
	I1026 09:29:12.803919  517769 mustload.go:65] Loading cluster: newest-cni-596581
	I1026 09:29:12.804453  517769 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:29:12.804921  517769 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:12.829928  517769 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:12.830243  517769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:29:12.918884  517769 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:29:12.907899811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:29:12.919589  517769 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-596581 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 09:29:12.922917  517769 out.go:179] * Pausing node newest-cni-596581 ... 
	I1026 09:29:12.925718  517769 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:12.926076  517769 ssh_runner.go:195] Run: systemctl --version
	I1026 09:29:12.926127  517769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:12.947814  517769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:13.054114  517769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:29:13.067954  517769 pause.go:52] kubelet running: true
	I1026 09:29:13.068050  517769 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:29:13.302493  517769 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:29:13.302586  517769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:29:13.408468  517769 cri.go:89] found id: "3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709"
	I1026 09:29:13.408493  517769 cri.go:89] found id: "a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d"
	I1026 09:29:13.408498  517769 cri.go:89] found id: "d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea"
	I1026 09:29:13.408503  517769 cri.go:89] found id: "f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8"
	I1026 09:29:13.408507  517769 cri.go:89] found id: "6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93"
	I1026 09:29:13.408510  517769 cri.go:89] found id: "8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c"
	I1026 09:29:13.408513  517769 cri.go:89] found id: ""
	I1026 09:29:13.408591  517769 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:29:13.421730  517769 retry.go:31] will retry after 169.017489ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:13Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:29:13.591111  517769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:29:13.604991  517769 pause.go:52] kubelet running: false
	I1026 09:29:13.605082  517769 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:29:13.837577  517769 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:29:13.837694  517769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:29:13.923619  517769 cri.go:89] found id: "3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709"
	I1026 09:29:13.923652  517769 cri.go:89] found id: "a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d"
	I1026 09:29:13.923659  517769 cri.go:89] found id: "d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea"
	I1026 09:29:13.923663  517769 cri.go:89] found id: "f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8"
	I1026 09:29:13.923693  517769 cri.go:89] found id: "6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93"
	I1026 09:29:13.923705  517769 cri.go:89] found id: "8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c"
	I1026 09:29:13.923709  517769 cri.go:89] found id: ""
	I1026 09:29:13.923768  517769 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:29:13.936045  517769 retry.go:31] will retry after 269.262102ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:13Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:29:14.205508  517769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:29:14.219131  517769 pause.go:52] kubelet running: false
	I1026 09:29:14.219201  517769 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:29:14.431764  517769 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:29:14.431863  517769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:29:14.552680  517769 cri.go:89] found id: "3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709"
	I1026 09:29:14.552704  517769 cri.go:89] found id: "a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d"
	I1026 09:29:14.552714  517769 cri.go:89] found id: "d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea"
	I1026 09:29:14.552718  517769 cri.go:89] found id: "f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8"
	I1026 09:29:14.552721  517769 cri.go:89] found id: "6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93"
	I1026 09:29:14.552726  517769 cri.go:89] found id: "8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c"
	I1026 09:29:14.552729  517769 cri.go:89] found id: ""
	I1026 09:29:14.552779  517769 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:29:14.566114  517769 retry.go:31] will retry after 406.133316ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:14Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:29:14.972716  517769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:29:14.987813  517769 pause.go:52] kubelet running: false
	I1026 09:29:14.987943  517769 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 09:29:15.154315  517769 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 09:29:15.154412  517769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 09:29:15.226097  517769 cri.go:89] found id: "3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709"
	I1026 09:29:15.226174  517769 cri.go:89] found id: "a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d"
	I1026 09:29:15.226194  517769 cri.go:89] found id: "d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea"
	I1026 09:29:15.226212  517769 cri.go:89] found id: "f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8"
	I1026 09:29:15.226249  517769 cri.go:89] found id: "6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93"
	I1026 09:29:15.226275  517769 cri.go:89] found id: "8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c"
	I1026 09:29:15.226294  517769 cri.go:89] found id: ""
	I1026 09:29:15.226401  517769 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 09:29:15.242216  517769 out.go:203] 
	W1026 09:29:15.245186  517769 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 09:29:15.245209  517769 out.go:285] * 
	* 
	W1026 09:29:15.252335  517769 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 09:29:15.255246  517769 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-596581 --alsologtostderr -v=1 failed: exit status 80
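Every retry in the stderr above dies on the same root cause: sudo runc list -f json fails with "open /run/runc: no such file or directory". runc keeps its state under /run/runc by default, and on this CRI-O node that directory is absent even though crictl still enumerates six running containers. A minimal way to probe this on the node, assuming docker exec access to the kic container (illustrative only, not part of the test):

	docker exec newest-cni-596581 sudo ls /run/runc        # reproduces the missing state directory
	docker exec newest-cni-596581 sudo crictl ps --quiet   # CRI-O still reports the container IDs listed above
	docker exec newest-cni-596581 sudo runc list -f json   # the exact call pause.go retries, same failure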
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-596581
helpers_test.go:243: (dbg) docker inspect newest-cni-596581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81",
	        "Created": "2025-10-26T09:28:08.038286143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515612,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:28:52.174371128Z",
	            "FinishedAt": "2025-10-26T09:28:51.081523548Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/hostname",
	        "HostsPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/hosts",
	        "LogPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81-json.log",
	        "Name": "/newest-cni-596581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-596581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-596581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81",
	                "LowerDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-596581",
	                "Source": "/var/lib/docker/volumes/newest-cni-596581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-596581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-596581",
	                "name.minikube.sigs.k8s.io": "newest-cni-596581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15a851535a956bfb4c8ffea215e9d580efb9210e2c86a00d1d570eea8cde14b3",
	            "SandboxKey": "/var/run/docker/netns/15a851535a95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-596581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:cd:31:1e:83:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3579436470ca4b2c8964527b6b8432c0aea2af9e0a0728e90452b5864afaf1c5",
	                    "EndpointID": "a2d9eb3ac50c9e1e0978d5758f69330aaba5c494f8bc4baba5d32098a6e9261b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-596581",
	                        "d784789bf46e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
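The Ports map in the inspect output above is what the earlier cli_runner call parsed to reach the node over SSH (HostPort 33470 for 22/tcp). The same Go template from the log, runnable on its own:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-596581
	# prints 33470, the host port bound to the node's sshd on 127.0.0.1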
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581: exit status 2 (345.532784ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-596581 logs -n 25
E1026 09:29:15.652632  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-596581 logs -n 25: (1.080406525s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                                                                                                   │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ no-preload-491604 image list --format=json                                                                                                                                                                                                    │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ pause   │ -p no-preload-491604 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ delete  │ -p no-preload-491604                                                                                                                                                                                                                          │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p no-preload-491604                                                                                                                                                                                                                          │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p auto-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-796399        │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-596581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ stop    │ -p newest-cni-596581 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-596581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:29 UTC │
	│ image   │ newest-cni-596581 image list --format=json                                                                                                                                                                                                    │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:29 UTC │ 26 Oct 25 09:29 UTC │
	│ pause   │ -p newest-cni-596581 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:28:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:28:51.804316  515472 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:28:51.804993  515472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:51.805039  515472 out.go:374] Setting ErrFile to fd 2...
	I1026 09:28:51.805064  515472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:51.805407  515472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:28:51.805892  515472 out.go:368] Setting JSON to false
	I1026 09:28:51.807039  515472 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11482,"bootTime":1761459450,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:28:51.807142  515472 start.go:141] virtualization:  
	I1026 09:28:51.812235  515472 out.go:179] * [newest-cni-596581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:28:51.815579  515472 notify.go:220] Checking for updates...
	I1026 09:28:51.815551  515472 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:28:51.819268  515472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:28:51.822765  515472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:28:51.825776  515472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:28:51.828717  515472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:28:51.832202  515472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:28:51.835617  515472 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:51.836203  515472 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:28:51.881598  515472 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:28:51.882008  515472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:51.980835  515472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:28:51.966330517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:51.980933  515472 docker.go:318] overlay module found
	I1026 09:28:51.984025  515472 out.go:179] * Using the docker driver based on existing profile
	I1026 09:28:51.986984  515472 start.go:305] selected driver: docker
	I1026 09:28:51.987001  515472 start.go:925] validating driver "docker" against &{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:51.987110  515472 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:28:51.987789  515472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:52.085529  515472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:28:52.069707241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:52.085880  515472 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:28:52.085917  515472 cni.go:84] Creating CNI manager for ""
	I1026 09:28:52.085981  515472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
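
The two cni.go lines above record the decision that matters for this profile: with the docker driver and the crio runtime, minikube has to install a CNI itself, and it recommends kindnet. A simplified sketch of that decision (illustrative only, not minikube's actual code; the function name is made up):

package main

import "fmt"

// chooseCNI mirrors, in simplified form, the recommendation logged above:
// the docker driver paired with a non-Docker runtime needs an explicit CNI,
// and kindnet is the default pick.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "" // other combinations leave networking to the runtime/driver
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // prints: kindnet
}
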
	I1026 09:28:52.086020  515472 start.go:349] cluster config:
	{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:52.089161  515472 out.go:179] * Starting "newest-cni-596581" primary control-plane node in "newest-cni-596581" cluster
	I1026 09:28:52.092105  515472 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:28:52.095108  515472 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:28:52.097841  515472 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:52.097938  515472 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:28:52.097977  515472 cache.go:58] Caching tarball of preloaded images
	I1026 09:28:52.098074  515472 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:28:52.098090  515472 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:28:52.098212  515472 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:52.098470  515472 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:28:52.118399  515472 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:28:52.118423  515472 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:28:52.118452  515472 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:28:52.118481  515472 start.go:360] acquireMachinesLock for newest-cni-596581: {Name:mk457b41350c6ab0aead81b63943ef6522def4bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:28:52.118550  515472 start.go:364] duration metric: took 42.922µs to acquireMachinesLock for "newest-cni-596581"
	I1026 09:28:52.118577  515472 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:28:52.118586  515472 fix.go:54] fixHost starting: 
	I1026 09:28:52.118904  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:52.140485  515472 fix.go:112] recreateIfNeeded on newest-cni-596581: state=Stopped err=<nil>
	W1026 09:28:52.140516  515472 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:28:49.336286  512470 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:28:49.729808  512470 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:28:50.131608  512470 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:28:50.132176  512470 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:28:50.681721  512470 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:28:51.027188  512470 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 09:28:51.373668  512470 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:28:51.981814  512470 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:28:52.156964  512470 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:28:52.158756  512470 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:28:52.169184  512470 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:28:52.172722  512470 out.go:252]   - Booting up control plane ...
	I1026 09:28:52.172832  512470 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:28:52.172919  512470 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:28:52.174089  512470 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:28:52.198616  512470 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:28:52.198758  512470 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:28:52.207542  512470 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:28:52.207647  512470 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:28:52.207842  512470 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 09:28:52.397263  512470 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 09:28:52.397395  512470 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 09:28:52.899921  512470 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 504.141333ms
	I1026 09:28:52.906583  512470 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 09:28:52.906684  512470 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 09:28:52.907014  512470 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 09:28:52.907103  512470 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
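
The control-plane-check phase interleaved here (process 512470) is simply health polling: each component exposes an endpoint (the three URLs in the lines above), and kubeadm retries until it answers or the 4m0s budget runs out. A stdlib-only sketch of that pattern, assuming self-signed bootstrap certificates (hence the InsecureSkipVerify); not kubeadm's actual code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls the endpoint until it returns 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Bootstrap components serve self-signed certs; skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// One of the endpoints from the log above.
	fmt.Println(waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute))
}
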
	I1026 09:28:52.143745  515472 out.go:252] * Restarting existing docker container for "newest-cni-596581" ...
	I1026 09:28:52.143859  515472 cli_runner.go:164] Run: docker start newest-cni-596581
	I1026 09:28:52.511268  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:52.546254  515472 kic.go:430] container "newest-cni-596581" state is running.
	I1026 09:28:52.546639  515472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:52.574737  515472 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:52.574956  515472 machine.go:93] provisionDockerMachine start ...
	I1026 09:28:52.575020  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:52.595714  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:52.596037  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:52.596047  515472 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:28:52.596745  515472 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52048->127.0.0.1:33470: read: connection reset by peer
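
The handshake failure above is expected right after `docker start`: sshd inside the restarted container is not accepting connections yet, and the provisioner simply retries (it succeeds about three seconds later, below). A minimal sketch of that retry loop using only the standard library, with the host port mapped for this run; a real client would then perform the SSH handshake on the connection:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials until the TCP port accepts a connection or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not come up within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:33470", time.Minute))
}
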
	I1026 09:28:55.795215  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:55.795258  515472 ubuntu.go:182] provisioning hostname "newest-cni-596581"
	I1026 09:28:55.795373  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:55.816837  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:55.817132  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:55.817144  515472 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-596581 && echo "newest-cni-596581" | sudo tee /etc/hostname
	I1026 09:28:56.013757  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:56.013929  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:56.043241  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:56.043555  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:56.043578  515472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-596581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-596581/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-596581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:28:56.224267  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:28:56.224294  515472 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:28:56.224348  515472 ubuntu.go:190] setting up certificates
	I1026 09:28:56.224359  515472 provision.go:84] configureAuth start
	I1026 09:28:56.224462  515472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:56.252313  515472 provision.go:143] copyHostCerts
	I1026 09:28:56.252380  515472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:28:56.252396  515472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:28:56.252470  515472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:28:56.252570  515472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:28:56.252576  515472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:28:56.252600  515472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:28:56.252686  515472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:28:56.252691  515472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:28:56.252713  515472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:28:56.252766  515472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.newest-cni-596581 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-596581]
	I1026 09:28:56.726956  515472 provision.go:177] copyRemoteCerts
	I1026 09:28:56.727072  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:28:56.727132  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:56.744777  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:56.850685  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:28:56.875890  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:28:56.905265  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 09:28:56.936025  515472 provision.go:87] duration metric: took 711.638488ms to configureAuth
	I1026 09:28:56.936101  515472 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:28:56.936354  515472 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:56.936509  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:56.963225  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:56.963531  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:56.963546  515472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:28:57.355674  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:28:57.355698  515472 machine.go:96] duration metric: took 4.780733213s to provisionDockerMachine
	I1026 09:28:57.355710  515472 start.go:293] postStartSetup for "newest-cni-596581" (driver="docker")
	I1026 09:28:57.355741  515472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:28:57.355846  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:28:57.355911  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.388431  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:57.511963  515472 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:28:57.515618  515472 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:28:57.515654  515472 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:28:57.515665  515472 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:28:57.515719  515472 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:28:57.515801  515472 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:28:57.515905  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:28:57.532040  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:57.559583  515472 start.go:296] duration metric: took 203.837391ms for postStartSetup
	I1026 09:28:57.559685  515472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:28:57.559770  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.590891  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:57.704286  515472 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:28:57.712704  515472 fix.go:56] duration metric: took 5.594109766s for fixHost
	I1026 09:28:57.712727  515472 start.go:83] releasing machines lock for "newest-cni-596581", held for 5.594162033s
	I1026 09:28:57.712801  515472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:57.766986  515472 ssh_runner.go:195] Run: cat /version.json
	I1026 09:28:57.767033  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.767271  515472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:28:57.767325  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.805145  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:57.808067  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:58.051678  515472 ssh_runner.go:195] Run: systemctl --version
	I1026 09:28:58.059067  515472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:28:58.131174  515472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:28:58.139982  515472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:28:58.140090  515472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:28:58.147879  515472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:28:58.147903  515472 start.go:495] detecting cgroup driver to use...
	I1026 09:28:58.147963  515472 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:28:58.148030  515472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:28:58.163267  515472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:28:58.177412  515472 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:28:58.177513  515472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:28:58.193836  515472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:28:58.210410  515472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:28:58.411592  515472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:28:58.628760  515472 docker.go:234] disabling docker service ...
	I1026 09:28:58.628860  515472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:28:58.650131  515472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:28:58.684432  515472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:28:58.874764  515472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:28:59.106040  515472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:28:59.126273  515472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:28:59.144353  515472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:28:59.144476  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.161395  515472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:28:59.161503  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.185362  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.197707  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.216519  515472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:28:59.225926  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.235011  515472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.243797  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.255908  515472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:28:59.267759  515472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:28:59.279702  515472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:59.479201  515472 ssh_runner.go:195] Run: sudo systemctl restart crio
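
Taken together, the sed pipeline above leaves /etc/crio/crio.conf.d/02-crio.conf with the pause image for this Kubernetes version, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and a default sysctl that unblocks low ports, after which CRI-O is restarted. A sketch of roughly what that drop-in ends up containing; the section placement follows CRI-O's documented schema and the actual file on the node may carry additional keys:

package main

import (
	"log"
	"os"
)

// Illustrative reconstruction of the drop-in the sed edits above produce;
// values are taken from the log, the TOML sections are an assumption.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		log.Fatal(err)
	}
}
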
	I1026 09:28:59.682445  515472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:28:59.682543  515472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:28:59.691658  515472 start.go:563] Will wait 60s for crictl version
	I1026 09:28:59.691753  515472 ssh_runner.go:195] Run: which crictl
	I1026 09:28:59.695268  515472 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:28:59.765413  515472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:28:59.765538  515472 ssh_runner.go:195] Run: crio --version
	I1026 09:28:59.819540  515472 ssh_runner.go:195] Run: crio --version
	I1026 09:28:59.875543  515472 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:28:59.879353  515472 cli_runner.go:164] Run: docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:59.906381  515472 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:28:59.912517  515472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
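
The bash one-liner above is an idempotent upsert on /etc/hosts: strip any existing host.minikube.internal mapping, append the current gateway IP, and copy the result back into place. The same pattern expressed in Go (a hypothetical helper for illustration, not minikube code; it also omits the sudo-cp step the shell version needs):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line already ending in "\t<name>" and appends a fresh
// "ip\tname" mapping, mirroring the grep -v / echo pipeline above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
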
	I1026 09:28:59.926578  515472 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 09:28:59.929485  515472 kubeadm.go:883] updating cluster {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:28:59.929647  515472 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:59.929722  515472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:59.979242  515472 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:59.979270  515472 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:28:59.979330  515472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:29:00.017498  515472 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:29:00.017604  515472 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:29:00.017629  515472 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:29:00.017788  515472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-596581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:29:00.017929  515472 ssh_runner.go:195] Run: crio config
	I1026 09:29:00.125101  515472 cni.go:84] Creating CNI manager for ""
	I1026 09:29:00.125183  515472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:29:00.125222  515472 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 09:29:00.125284  515472 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-596581 NodeName:newest-cni-596581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:29:00.125488  515472 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-596581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 09:29:00.125612  515472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:29:00.143591  515472 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:29:00.143749  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:29:00.156632  515472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:29:00.182247  515472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:29:00.208977  515472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
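
The 2212-byte kubeadm.yaml.new written here is the multi-document file printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". A stdlib-only sketch that splits the documents and lists their kinds, using the path from this run (an illustrative helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Split on the YAML document separator and report each document's kind.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
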
	I1026 09:29:00.250316  515472 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:29:00.258486  515472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:29:00.291693  515472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:29:00.515934  515472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:29:00.553661  515472 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581 for IP: 192.168.76.2
	I1026 09:29:00.553737  515472 certs.go:195] generating shared ca certs ...
	I1026 09:29:00.553769  515472 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:00.553937  515472 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:29:00.554020  515472 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:29:00.554054  515472 certs.go:257] generating profile certs ...
	I1026 09:29:00.554184  515472 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key
	I1026 09:29:00.554302  515472 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff
	I1026 09:29:00.554391  515472 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key
	I1026 09:29:00.554553  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:29:00.554615  515472 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:29:00.554640  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:29:00.554698  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:29:00.554791  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:29:00.554855  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:29:00.554949  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:29:00.560324  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:29:00.601812  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:29:00.654099  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:29:00.689112  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:29:00.743800  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:29:00.768991  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:29:00.789605  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:29:00.816062  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:29:00.851865  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:29:00.897639  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:29:00.941576  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:29:00.977751  515472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:29:00.990598  515472 ssh_runner.go:195] Run: openssl version
	I1026 09:29:00.999962  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:29:01.010901  515472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:29:01.016297  515472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:29:01.016419  515472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:29:01.076188  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:29:01.091687  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:29:01.101879  515472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:29:01.107300  515472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:29:01.107389  515472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:29:01.153044  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:29:01.164905  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:29:01.179404  515472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:29:01.184449  515472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:29:01.184558  515472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:29:01.230661  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 09:29:01.239574  515472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:29:01.244054  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:29:01.294677  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:29:01.410586  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:29:01.563354  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:29:01.629306  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:29:01.736974  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
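
Each `openssl x509 -checkend 86400` call above asks one question: does the certificate expire within the next 24 hours? The equivalent check with Go's crypto/x509, against one of the same paths:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Mirrors -checkend 86400: flag certificates that expire within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
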
	I1026 09:29:01.821284  515472 kubeadm.go:400] StartCluster: {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:29:01.821434  515472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:29:01.821539  515472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:29:01.899261  515472 cri.go:89] found id: "d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea"
	I1026 09:29:01.899324  515472 cri.go:89] found id: "f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8"
	I1026 09:29:01.899354  515472 cri.go:89] found id: "6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93"
	I1026 09:29:01.899374  515472 cri.go:89] found id: "8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c"
	I1026 09:29:01.899395  515472 cri.go:89] found id: ""
	I1026 09:29:01.899479  515472 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:29:01.944828  515472 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:01Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:29:01.944991  515472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:29:01.978627  515472 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:29:01.978701  515472 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:29:01.978796  515472 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:29:02.011613  515472 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:29:02.012138  515472 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-596581" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:29:02.012320  515472 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-596581" cluster setting kubeconfig missing "newest-cni-596581" context setting]
	I1026 09:29:02.012684  515472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:02.014413  515472 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:29:02.038291  515472 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 09:29:02.038374  515472 kubeadm.go:601] duration metric: took 59.631248ms to restartPrimaryControlPlane
	I1026 09:29:02.038398  515472 kubeadm.go:402] duration metric: took 217.123859ms to StartCluster
	I1026 09:29:02.038436  515472 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:02.038521  515472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:29:02.039244  515472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:02.039509  515472 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:29:02.039854  515472 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:29:02.039935  515472 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:29:02.040090  515472 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-596581"
	I1026 09:29:02.040150  515472 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-596581"
	W1026 09:29:02.040218  515472 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:29:02.040262  515472 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:02.040981  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.041195  515472 addons.go:69] Setting dashboard=true in profile "newest-cni-596581"
	I1026 09:29:02.041235  515472 addons.go:238] Setting addon dashboard=true in "newest-cni-596581"
	W1026 09:29:02.041266  515472 addons.go:247] addon dashboard should already be in state true
	I1026 09:29:02.041304  515472 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:02.041580  515472 addons.go:69] Setting default-storageclass=true in profile "newest-cni-596581"
	I1026 09:29:02.041663  515472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-596581"
	I1026 09:29:02.041937  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.042340  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.047867  515472 out.go:179] * Verifying Kubernetes components...
	I1026 09:29:02.050975  515472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:29:02.106837  515472 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:29:02.111667  515472 addons.go:238] Setting addon default-storageclass=true in "newest-cni-596581"
	W1026 09:29:02.111705  515472 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:29:02.111739  515472 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:02.112217  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.115316  515472 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:02.115336  515472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:29:02.115395  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:02.125910  515472 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:29:02.130480  515472 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:28:59.727285  512470 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.819975995s
	I1026 09:29:01.406831  512470 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.495820615s
	I1026 09:29:02.908807  512470 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.001738263s
	I1026 09:29:02.959727  512470 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:29:03.049222  512470 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:29:03.096320  512470 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:29:03.097011  512470 kubeadm.go:318] [mark-control-plane] Marking the node auto-796399 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:29:03.127223  512470 kubeadm.go:318] [bootstrap-token] Using token: sd9xx6.ik4a8kq1wzxesora
	I1026 09:29:03.131012  512470 out.go:252]   - Configuring RBAC rules ...
	I1026 09:29:03.131166  512470 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:29:03.149519  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:29:03.168322  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:29:03.179360  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:29:03.187729  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:29:03.199130  512470 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:29:03.320313  512470 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:29:03.959116  512470 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:29:04.340806  512470 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:29:04.341926  512470 kubeadm.go:318] 
	I1026 09:29:04.342030  512470 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:29:04.342037  512470 kubeadm.go:318] 
	I1026 09:29:04.342126  512470 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:29:04.342131  512470 kubeadm.go:318] 
	I1026 09:29:04.342170  512470 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:29:04.342238  512470 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:29:04.342296  512470 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:29:04.342301  512470 kubeadm.go:318] 
	I1026 09:29:04.342367  512470 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:29:04.342372  512470 kubeadm.go:318] 
	I1026 09:29:04.342429  512470 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:29:04.342439  512470 kubeadm.go:318] 
	I1026 09:29:04.342500  512470 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:29:04.342591  512470 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:29:04.342673  512470 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:29:04.342684  512470 kubeadm.go:318] 
	I1026 09:29:04.347290  512470 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:29:04.347398  512470 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:29:04.347404  512470 kubeadm.go:318] 
	I1026 09:29:04.347502  512470 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sd9xx6.ik4a8kq1wzxesora \
	I1026 09:29:04.347628  512470 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:29:04.347656  512470 kubeadm.go:318] 	--control-plane 
	I1026 09:29:04.347664  512470 kubeadm.go:318] 
	I1026 09:29:04.347764  512470 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:29:04.347769  512470 kubeadm.go:318] 
	I1026 09:29:04.347867  512470 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sd9xx6.ik4a8kq1wzxesora \
	I1026 09:29:04.347987  512470 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:29:04.356245  512470 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:29:04.356709  512470 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:29:04.356943  512470 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
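
	[annotation] The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of re-deriving it manually, assuming the standard kubeadm CA path inside the node (this step is not part of the logged run):

		# Hash of the CA public key, per kubeadm's token-based discovery docs.
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
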
	I1026 09:29:04.357003  512470 cni.go:84] Creating CNI manager for ""
	I1026 09:29:04.357034  512470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:29:04.362241  512470 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:29:02.133325  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:29:02.133369  515472 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:29:02.133456  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:02.158873  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:02.191030  515472 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:02.191051  515472 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:29:02.191045  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:02.191119  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:02.220701  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:02.585853  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:29:02.585917  515472 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:29:02.593490  515472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:02.647077  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:29:02.647144  515472 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:29:02.694831  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:29:02.694902  515472 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:29:02.708428  515472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:02.811654  515472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:29:02.822411  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:29:02.822484  515472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:29:02.886069  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:29:02.886143  515472 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:29:03.024568  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:29:03.024643  515472 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:29:03.120997  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:29:03.121020  515472 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:29:03.235878  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:29:03.235910  515472 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:29:03.271781  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:29:03.271807  515472 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:29:03.307986  515472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:29:04.365172  512470 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:29:04.379866  512470 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:29:04.379898  512470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:29:04.435236  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
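
	[annotation] The stat of /opt/cni/bin/portmap and the kindnet manifest apply above are what eventually clear the node's NotReady state. A quick check that the CNI pieces are in place inside the node (run via minikube ssh; paths follow standard CNI conventions):

		# kubelet reports NetworkReady=false until a config appears here.
		ls /etc/cni/net.d/
		stat /opt/cni/bin/portmap
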
	I1026 09:29:05.201671  512470 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:29:05.201808  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:05.201879  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-796399 minikube.k8s.io/updated_at=2025_10_26T09_29_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=auto-796399 minikube.k8s.io/primary=true
	I1026 09:29:05.612682  512470 ops.go:34] apiserver oom_adj: -16
	I1026 09:29:05.612812  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:06.113871  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:06.613368  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:07.113446  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:07.612887  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:08.113437  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:08.383555  512470 kubeadm.go:1113] duration metric: took 3.181790393s to wait for elevateKubeSystemPrivileges
	I1026 09:29:08.383587  512470 kubeadm.go:402] duration metric: took 23.218859979s to StartCluster
	I1026 09:29:08.383605  512470 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:08.383671  512470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:29:08.384638  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:08.384863  512470 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:29:08.384955  512470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:29:08.385215  512470 config.go:182] Loaded profile config "auto-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:29:08.385228  512470 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:29:08.385311  512470 addons.go:69] Setting storage-provisioner=true in profile "auto-796399"
	I1026 09:29:08.385328  512470 addons.go:238] Setting addon storage-provisioner=true in "auto-796399"
	I1026 09:29:08.385354  512470 host.go:66] Checking if "auto-796399" exists ...
	I1026 09:29:08.385860  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:29:08.386034  512470 addons.go:69] Setting default-storageclass=true in profile "auto-796399"
	I1026 09:29:08.386051  512470 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-796399"
	I1026 09:29:08.386297  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:29:08.391013  512470 out.go:179] * Verifying Kubernetes components...
	I1026 09:29:08.394069  512470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:29:08.427573  512470 addons.go:238] Setting addon default-storageclass=true in "auto-796399"
	I1026 09:29:08.427613  512470 host.go:66] Checking if "auto-796399" exists ...
	I1026 09:29:08.428064  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:29:08.429411  512470 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:29:08.432361  512470 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:08.432395  512470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:29:08.432473  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:29:08.464665  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:29:08.476448  512470 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:08.476469  512470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:29:08.476538  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:29:08.502361  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:29:08.988110  512470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:08.995536  512470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:09.243186  512470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:29:09.243436  512470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:29:10.828981  512470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.840768429s)
	I1026 09:29:10.829108  512470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.833507123s)
	I1026 09:29:10.829177  512470 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.585924698s)
	I1026 09:29:10.830466  512470 node_ready.go:35] waiting up to 15m0s for node "auto-796399" to be "Ready" ...
	I1026 09:29:10.829186  512470 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.585660029s)
	I1026 09:29:10.830864  512470 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
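
	[annotation] The sed pipeline above splices two stanzas into the CoreDNS Corefile: a log directive before errors, and a hosts block before the forward plugin. Reconstructed from the sed expressions (the resulting Corefile itself is not captured in this log), the injected hosts block reads:

		hosts {
		   192.168.85.1 host.minikube.internal
		   fallthrough
		}

	fallthrough ensures that names not matched by the static entry still reach the upstream resolver.
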
	I1026 09:29:10.903179  512470 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 09:29:11.904126  515472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.310563856s)
	I1026 09:29:11.904241  515472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.195733467s)
	I1026 09:29:11.904271  515472 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.09254511s)
	I1026 09:29:11.904568  515472 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:29:11.904625  515472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:29:11.904369  515472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.596338697s)
	I1026 09:29:11.907648  515472 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-596581 addons enable metrics-server
	
	I1026 09:29:11.923741  515472 api_server.go:72] duration metric: took 9.884162892s to wait for apiserver process to appear ...
	I1026 09:29:11.923766  515472 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:29:11.923787  515472 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 09:29:11.937522  515472 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:29:11.939190  515472 api_server.go:141] control plane version: v1.34.1
	I1026 09:29:11.939223  515472 api_server.go:131] duration metric: took 15.449672ms to wait for apiserver health ...
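
	[annotation] A manual equivalent of the healthz probe logged above (endpoint taken from this run; -k because the apiserver presents a cluster-internal certificate, and /healthz is readable by unauthenticated clients via the default system:public-info-viewer binding):

		curl -k https://192.168.76.2:8443/healthz
		# expected on success, matching the 200 above: ok
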
	I1026 09:29:11.939233  515472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:29:11.943630  515472 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 09:29:11.946013  515472 system_pods.go:59] 8 kube-system pods found
	I1026 09:29:11.946050  515472 system_pods.go:61] "coredns-66bc5c9577-ls7nq" [62473023-ba0e-4958-991d-1a2cde76799e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 09:29:11.946062  515472 system_pods.go:61] "etcd-newest-cni-596581" [264f14d5-6146-4dcb-9f23-d72280bb5ea2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:29:11.946068  515472 system_pods.go:61] "kindnet-2j87q" [8ac4e2f0-aa4a-4f25-9328-aefbce3cde40] Running
	I1026 09:29:11.946076  515472 system_pods.go:61] "kube-apiserver-newest-cni-596581" [cbc6496d-a07d-4174-a276-5e1829b8b8b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:29:11.946083  515472 system_pods.go:61] "kube-controller-manager-newest-cni-596581" [23925cb0-7d94-4d90-8550-de65406a9bc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:29:11.946088  515472 system_pods.go:61] "kube-proxy-72xqz" [bbd599f1-02d6-4a30-b5d6-a2d81d11c10e] Running
	I1026 09:29:11.946094  515472 system_pods.go:61] "kube-scheduler-newest-cni-596581" [644365d1-94e6-4b78-84e9-fae9ef2bfb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:29:11.946099  515472 system_pods.go:61] "storage-provisioner" [f949f69f-a15f-4d9d-b1b7-5f29bed135bf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 09:29:11.946106  515472 system_pods.go:74] duration metric: took 6.867158ms to wait for pod list to return data ...
	I1026 09:29:11.946115  515472 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:29:11.947290  515472 addons.go:514] duration metric: took 9.907352521s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 09:29:11.948867  515472 default_sa.go:45] found service account: "default"
	I1026 09:29:11.948886  515472 default_sa.go:55] duration metric: took 2.766157ms for default service account to be created ...
	I1026 09:29:11.948897  515472 kubeadm.go:586] duration metric: took 9.909325635s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:29:11.948912  515472 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:29:11.951474  515472 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:29:11.951499  515472 node_conditions.go:123] node cpu capacity is 2
	I1026 09:29:11.951511  515472 node_conditions.go:105] duration metric: took 2.594192ms to run NodePressure ...
	I1026 09:29:11.951523  515472 start.go:241] waiting for startup goroutines ...
	I1026 09:29:11.951530  515472 start.go:246] waiting for cluster config update ...
	I1026 09:29:11.951541  515472 start.go:255] writing updated cluster config ...
	I1026 09:29:11.951837  515472 ssh_runner.go:195] Run: rm -f paused
	I1026 09:29:12.030461  515472 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:29:12.033770  515472 out.go:179] * Done! kubectl is now configured to use "newest-cni-596581" cluster and "default" namespace by default
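
	[annotation] The minor-skew note above compares the host kubectl (1.33.2) against the cluster (1.34.1), which is within kubectl's supported one-minor-version skew. A sketch of reproducing the comparison (jq is an assumption, not part of this run):

		kubectl version --output=json \
		  | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
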
	I1026 09:29:10.906127  512470 addons.go:514] duration metric: took 2.520895084s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 09:29:11.341664  512470 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-796399" context rescaled to 1 replicas
	W1026 09:29:12.833758  512470 node_ready.go:57] node "auto-796399" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.445691845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.455838571Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4d8e2e92-a904-42e2-940d-c9242b1654e5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.469323449Z" level=info msg="Ran pod sandbox 991bf2acf82be1c43a57d787cd7d567a8e4bd5796b6001eb90562fe546169249 with infra container: kube-system/kindnet-2j87q/POD" id=4d8e2e92-a904-42e2-940d-c9242b1654e5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.47038967Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-72xqz/POD" id=f13b6d00-5eab-42e7-9518-50225361652f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.470507768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.485448494Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=43430853-ef9d-4ba1-943e-f6a18adb81b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.49205439Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f13b6d00-5eab-42e7-9518-50225361652f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.50293829Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0ebcb1fc-0bc7-422e-bf52-5983c992d2a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.523434461Z" level=info msg="Creating container: kube-system/kindnet-2j87q/kindnet-cni" id=d48cf4ae-0b75-471b-b4c9-9a6ebc32c952 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.523894996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.539363409Z" level=info msg="Ran pod sandbox 09a691a853d9df7ec0194a5646de262c730b02db1275e7ea99225c60d2d4b4d5 with infra container: kube-system/kube-proxy-72xqz/POD" id=f13b6d00-5eab-42e7-9518-50225361652f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.594967019Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6b4a872d-ca23-4f5b-906c-cb614202d7ef name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.606406078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.607789464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.612215024Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=afb691e0-3acf-4892-b64b-51cd65c48a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.628365529Z" level=info msg="Creating container: kube-system/kube-proxy-72xqz/kube-proxy" id=213bf576-ef0c-4d8f-b923-9b70d79a11aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.628502573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.63616893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.657460155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.715936878Z" level=info msg="Created container a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d: kube-system/kindnet-2j87q/kindnet-cni" id=d48cf4ae-0b75-471b-b4c9-9a6ebc32c952 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.723390834Z" level=info msg="Starting container: a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d" id=503727c9-89aa-46b9-a57b-3e366b645158 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.727789661Z" level=info msg="Started container" PID=1057 containerID=a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d description=kube-system/kindnet-2j87q/kindnet-cni id=503727c9-89aa-46b9-a57b-3e366b645158 name=/runtime.v1.RuntimeService/StartContainer sandboxID=991bf2acf82be1c43a57d787cd7d567a8e4bd5796b6001eb90562fe546169249
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.746468511Z" level=info msg="Created container 3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709: kube-system/kube-proxy-72xqz/kube-proxy" id=213bf576-ef0c-4d8f-b923-9b70d79a11aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.755640209Z" level=info msg="Starting container: 3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709" id=3eb9480a-5f8a-4710-a322-f3a92fc35d9c name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.765379701Z" level=info msg="Started container" PID=1060 containerID=3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709 description=kube-system/kube-proxy-72xqz/kube-proxy id=3eb9480a-5f8a-4710-a322-f3a92fc35d9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=09a691a853d9df7ec0194a5646de262c730b02db1275e7ea99225c60d2d4b4d5
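
	[annotation] The CRI-O entries above trace the standard CRI lifecycle for each pod: RunPodSandbox creates the infra container, then CreateContainer and StartContainer run the workload inside it. A sketch of inspecting the same objects from inside the node with crictl (assumed available in minikube's CRI-O node image):

		sudo crictl pods --name kube-proxy-72xqz   # the sandbox logged above
		sudo crictl ps --name kube-proxy           # the started container
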
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b64b58362f0b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   09a691a853d9d       kube-proxy-72xqz                            kube-system
	a0f17c416b5a1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   991bf2acf82be       kindnet-2j87q                               kube-system
	d16d844e0356c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   8b848a101bb09       kube-controller-manager-newest-cni-596581   kube-system
	f395de9cd02d7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   30cb4995906e6       kube-apiserver-newest-cni-596581            kube-system
	6d54c056352f2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   34448a782213a       etcd-newest-cni-596581                      kube-system
	8e7b50e63ed8f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   3ba42f3d100ef       kube-scheduler-newest-cni-596581            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-596581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-596581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=newest-cni-596581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_28_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:28:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-596581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:29:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-596581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                93ba9819-a70b-44ae-b5e4-6adc0588dffe
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-596581                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-2j87q                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-596581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-596581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-72xqz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-596581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 48s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 48s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     37s                kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-596581 event: Registered Node newest-cni-596581 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 16s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 16s)  kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 16s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-596581 event: Registered Node newest-cni-596581 in Controller
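
	[annotation] The Ready=False condition and the node.kubernetes.io/not-ready taint above both trace back to the missing CNI config named in the KubeletNotReady message; the coredns and storage-provisioner pods reported as Unschedulable earlier are blocked on exactly that taint. A sketch of listing what the taint is holding back (assumed invocation, not from this run):

		kubectl get pods -A --field-selector=status.phase=Pending -o wide
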
	
	
	==> dmesg <==
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	[Oct26 09:28] overlayfs: idmapped layers are currently not supported
	[ +24.446098] overlayfs: idmapped layers are currently not supported
	[Oct26 09:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93] <==
	{"level":"warn","ts":"2025-10-26T09:29:06.279243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.308264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.338381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.355384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.374511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.394649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.414966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.434869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.499255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.542835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.571595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.598507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.655753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.766871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.797201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.825973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.845179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.919938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.950365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.973325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.027135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.081014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.147570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.216570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.357335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:29:16 up  3:11,  0 user,  load average: 7.53, 4.47, 3.37
	Linux newest-cni-596581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d] <==
	I1026 09:29:10.846285       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:29:10.846541       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:29:10.846648       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:29:10.846659       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:29:10.846671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:29:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:29:11.052184       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:29:11.052204       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:29:11.052213       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:29:11.052508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8] <==
	I1026 09:29:09.799390       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:29:09.799457       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 09:29:09.799497       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 09:29:09.777140       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:29:09.800076       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:29:09.823128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:29:09.824179       1 aggregator.go:171] initial CRD sync complete...
	I1026 09:29:09.824194       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 09:29:09.824201       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:29:09.824208       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:29:09.826506       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:29:09.842453       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1026 09:29:09.915095       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:29:10.157304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:29:10.264598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:29:11.421787       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:29:11.513626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:29:11.560767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:29:11.579213       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:29:11.703060       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.109.94"}
	I1026 09:29:11.729885       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.238.156"}
	I1026 09:29:13.780791       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:29:13.873927       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:29:14.123524       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 09:29:14.277571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea] <==
	I1026 09:29:13.751967       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 09:29:13.760434       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:29:13.763696       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 09:29:13.770145       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:29:13.770290       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:29:13.770464       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:29:13.770529       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:29:13.771588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:29:13.771616       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:29:13.771622       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:29:13.771673       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:29:13.771720       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 09:29:13.772661       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 09:29:13.778977       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:29:13.779717       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:29:13.781942       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:29:13.785017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:29:13.785091       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:29:13.788616       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:29:13.791699       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:29:13.796514       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:29:13.799005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:29:13.805276       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:29:13.815688       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:29:13.839606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709] <==
	I1026 09:29:11.334535       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:29:11.553658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:29:11.664751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:29:11.664795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 09:29:11.664894       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:29:11.727972       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:29:11.728092       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:29:11.745156       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:29:11.745646       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:29:11.745819       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:29:11.747267       1 config.go:200] "Starting service config controller"
	I1026 09:29:11.747408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:29:11.747463       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:29:11.747492       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:29:11.747528       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:29:11.747556       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:29:11.748162       1 config.go:309] "Starting node config controller"
	I1026 09:29:11.750580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:29:11.750636       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:29:11.848298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:29:11.848334       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:29:11.848378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c] <==
	I1026 09:29:06.474139       1 serving.go:386] Generated self-signed cert in-memory
	W1026 09:29:09.429166       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 09:29:09.429325       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 09:29:09.429365       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 09:29:09.429396       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 09:29:09.908235       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:29:09.908331       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:29:09.949819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:29:09.955028       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:29:09.955060       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:29:09.955091       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:29:10.055743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.366619     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.394290     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.768435     726 apiserver.go:52] "Watching apiserver"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.893976     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: E1026 09:29:09.933907     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-596581\" already exists" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: E1026 09:29:09.934282     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-596581\" already exists" pod="kube-system/kube-scheduler-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.934302     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984348     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-xtables-lock\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984433     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-cni-cfg\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984454     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-lib-modules\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984470     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-xtables-lock\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984488     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-lib-modules\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.985816     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.986181     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.986291     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.987684     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: E1026 09:29:10.117233     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-596581\" already exists" pod="kube-system/etcd-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: I1026 09:29:10.117278     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: I1026 09:29:10.239776     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: E1026 09:29:10.315216     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-596581\" already exists" pod="kube-system/kube-apiserver-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: I1026 09:29:10.316505     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: E1026 09:29:10.413888     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-596581\" already exists" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:13 newest-cni-596581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:29:13 newest-cni-596581 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:29:13 newest-cni-596581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
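Note: the kube-proxy log above warns that nodePortAddresses is unset and suggests `--nodeport-addresses primary`. A hedged sketch of acting on that warning on a kubeadm-managed cluster (assumption: kube-proxy settings live in the standard kube-system/kube-proxy ConfigMap and the kube-proxy version accepts the "primary" special value, as the warning implies):

	kubectl -n kube-system edit configmap kube-proxy
	# under the KubeProxyConfiguration section, set:
	#   nodePortAddresses: ["primary"]
	# then restart the kube-proxy pods so they pick up the change:
	kubectl -n kube-system rollout restart daemonset kube-proxy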
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-596581 -n newest-cni-596581
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-596581 -n newest-cni-596581: exit status 2 (343.96277ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
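Note: these status probes pass a Go template to minikube's --format flag; .APIServer here and .Host (used further below) are fields of the status struct. A hedged sketch combining both fields in one call, using the same binary and profile as above:

	out/minikube-linux-arm64 status -p newest-cni-596581 --format='host: {{.Host}}, apiserver: {{.APIServer}}'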
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-596581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2: exit status 1 (81.256296ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ls7nq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rcl8c" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9mbn2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2: exit status 1
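Note: the non-running pods were found with a field selector plus JSONPath (see the `get po` invocation above); the NotFound errors from describe are likely because the command omitted a namespace while those pods live in kube-system and kubernetes-dashboard. A hedged sketch that prints namespace, name, and phase together, so a follow-up describe can be properly namespaced:

	kubectl --context newest-cni-596581 get pods -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} {.status.phase}{"\n"}{end}'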
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-596581
helpers_test.go:243: (dbg) docker inspect newest-cni-596581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81",
	        "Created": "2025-10-26T09:28:08.038286143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515612,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T09:28:52.174371128Z",
	            "FinishedAt": "2025-10-26T09:28:51.081523548Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/hostname",
	        "HostsPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/hosts",
	        "LogPath": "/var/lib/docker/containers/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81/d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81-json.log",
	        "Name": "/newest-cni-596581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-596581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-596581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d784789bf46eabce25c484f52cfa2f42b20eccb3eef041622028b858a2862f81",
	                "LowerDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb-init/diff:/var/lib/docker/overlay2/c15967f0211df7addb4c87566ba6050e9e6b4c7fa4419ad25f6fff0f34dec7cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8d7e792e4f974ea2b927a2819a4ed2841a2098de8e032928f739228bf3f94eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-596581",
	                "Source": "/var/lib/docker/volumes/newest-cni-596581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-596581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-596581",
	                "name.minikube.sigs.k8s.io": "newest-cni-596581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15a851535a956bfb4c8ffea215e9d580efb9210e2c86a00d1d570eea8cde14b3",
	            "SandboxKey": "/var/run/docker/netns/15a851535a95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-596581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:cd:31:1e:83:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3579436470ca4b2c8964527b6b8432c0aea2af9e0a0728e90452b5864afaf1c5",
	                    "EndpointID": "a2d9eb3ac50c9e1e0978d5758f69330aaba5c494f8bc4baba5d32098a6e9261b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-596581",
	                        "d784789bf46e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
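Note: single fields can be pulled from this JSON with docker inspect's --format Go template instead of dumping everything; minikube itself uses the same technique later in these logs to resolve the mapped SSH port. A hedged sketch against the container above:

	docker inspect newest-cni-596581 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker inspect newest-cni-596581 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'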
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581: exit status 2 (353.442222ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-596581 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-596581 logs -n 25: (1.11013056s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:25 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-204381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p embed-certs-204381 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:26 UTC │
	│ start   │ -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-491604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │                     │
	│ stop    │ -p no-preload-491604 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:26 UTC │ 26 Oct 25 09:27 UTC │
	│ addons  │ enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ start   │ -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ embed-certs-204381 image list --format=json                                                                                                                                                                                                   │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:27 UTC │
	│ pause   │ -p embed-certs-204381 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │                     │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:27 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p embed-certs-204381                                                                                                                                                                                                                         │ embed-certs-204381 │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ image   │ no-preload-491604 image list --format=json                                                                                                                                                                                                    │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ pause   │ -p no-preload-491604 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ delete  │ -p no-preload-491604                                                                                                                                                                                                                          │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ delete  │ -p no-preload-491604                                                                                                                                                                                                                          │ no-preload-491604  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p auto-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-796399        │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-596581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │                     │
	│ stop    │ -p newest-cni-596581 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-596581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:28 UTC │
	│ start   │ -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:28 UTC │ 26 Oct 25 09:29 UTC │
	│ image   │ newest-cni-596581 image list --format=json                                                                                                                                                                                                    │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:29 UTC │ 26 Oct 25 09:29 UTC │
	│ pause   │ -p newest-cni-596581 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-596581  │ jenkins │ v1.37.0 │ 26 Oct 25 09:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 09:28:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 09:28:51.804316  515472 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:28:51.804993  515472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:51.805039  515472 out.go:374] Setting ErrFile to fd 2...
	I1026 09:28:51.805064  515472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:28:51.805407  515472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:28:51.805892  515472 out.go:368] Setting JSON to false
	I1026 09:28:51.807039  515472 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11482,"bootTime":1761459450,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:28:51.807142  515472 start.go:141] virtualization:  
	I1026 09:28:51.812235  515472 out.go:179] * [newest-cni-596581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:28:51.815579  515472 notify.go:220] Checking for updates...
	I1026 09:28:51.815551  515472 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:28:51.819268  515472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:28:51.822765  515472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:28:51.825776  515472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:28:51.828717  515472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:28:51.832202  515472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:28:51.835617  515472 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:51.836203  515472 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:28:51.881598  515472 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:28:51.882008  515472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:51.980835  515472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:28:51.966330517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:51.980933  515472 docker.go:318] overlay module found
	I1026 09:28:51.984025  515472 out.go:179] * Using the docker driver based on existing profile
	I1026 09:28:51.986984  515472 start.go:305] selected driver: docker
	I1026 09:28:51.987001  515472 start.go:925] validating driver "docker" against &{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:51.987110  515472 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:28:51.987789  515472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:28:52.085529  515472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 09:28:52.069707241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:28:52.085880  515472 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:28:52.085917  515472 cni.go:84] Creating CNI manager for ""
	I1026 09:28:52.085981  515472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:28:52.086020  515472 start.go:349] cluster config:
	{Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:28:52.089161  515472 out.go:179] * Starting "newest-cni-596581" primary control-plane node in "newest-cni-596581" cluster
	I1026 09:28:52.092105  515472 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 09:28:52.095108  515472 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 09:28:52.097841  515472 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:52.097938  515472 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 09:28:52.097977  515472 cache.go:58] Caching tarball of preloaded images
	I1026 09:28:52.098074  515472 preload.go:233] Found /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 09:28:52.098090  515472 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 09:28:52.098212  515472 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:52.098470  515472 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 09:28:52.118399  515472 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 09:28:52.118423  515472 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 09:28:52.118452  515472 cache.go:232] Successfully downloaded all kic artifacts
	I1026 09:28:52.118481  515472 start.go:360] acquireMachinesLock for newest-cni-596581: {Name:mk457b41350c6ab0aead81b63943ef6522def4bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 09:28:52.118550  515472 start.go:364] duration metric: took 42.922µs to acquireMachinesLock for "newest-cni-596581"
	I1026 09:28:52.118577  515472 start.go:96] Skipping create...Using existing machine configuration
	I1026 09:28:52.118586  515472 fix.go:54] fixHost starting: 
	I1026 09:28:52.118904  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:52.140485  515472 fix.go:112] recreateIfNeeded on newest-cni-596581: state=Stopped err=<nil>
	W1026 09:28:52.140516  515472 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 09:28:49.336286  512470 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 09:28:49.729808  512470 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 09:28:50.131608  512470 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 09:28:50.132176  512470 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 09:28:50.681721  512470 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 09:28:51.027188  512470 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 09:28:51.373668  512470 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 09:28:51.981814  512470 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 09:28:52.156964  512470 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 09:28:52.158756  512470 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 09:28:52.169184  512470 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 09:28:52.172722  512470 out.go:252]   - Booting up control plane ...
	I1026 09:28:52.172832  512470 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 09:28:52.172919  512470 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 09:28:52.174089  512470 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 09:28:52.198616  512470 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 09:28:52.198758  512470 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 09:28:52.207542  512470 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 09:28:52.207647  512470 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 09:28:52.207842  512470 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 09:28:52.397263  512470 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 09:28:52.397395  512470 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 09:28:52.899921  512470 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 504.141333ms
	I1026 09:28:52.906583  512470 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 09:28:52.906684  512470 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 09:28:52.907014  512470 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 09:28:52.907103  512470 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 09:28:52.143745  515472 out.go:252] * Restarting existing docker container for "newest-cni-596581" ...
	I1026 09:28:52.143859  515472 cli_runner.go:164] Run: docker start newest-cni-596581
	I1026 09:28:52.511268  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:28:52.546254  515472 kic.go:430] container "newest-cni-596581" state is running.
	I1026 09:28:52.546639  515472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:52.574737  515472 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/config.json ...
	I1026 09:28:52.574956  515472 machine.go:93] provisionDockerMachine start ...
	I1026 09:28:52.575020  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:52.595714  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:52.596037  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:52.596047  515472 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 09:28:52.596745  515472 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52048->127.0.0.1:33470: read: connection reset by peer
	I1026 09:28:55.795215  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:55.795258  515472 ubuntu.go:182] provisioning hostname "newest-cni-596581"
	I1026 09:28:55.795373  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:55.816837  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:55.817132  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:55.817144  515472 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-596581 && echo "newest-cni-596581" | sudo tee /etc/hostname
	I1026 09:28:56.013757  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-596581
	
	I1026 09:28:56.013929  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:56.043241  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:56.043555  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:56.043578  515472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-596581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-596581/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-596581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 09:28:56.224267  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 09:28:56.224294  515472 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-293616/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-293616/.minikube}
	I1026 09:28:56.224348  515472 ubuntu.go:190] setting up certificates
	I1026 09:28:56.224359  515472 provision.go:84] configureAuth start
	I1026 09:28:56.224462  515472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:56.252313  515472 provision.go:143] copyHostCerts
	I1026 09:28:56.252380  515472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem, removing ...
	I1026 09:28:56.252396  515472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem
	I1026 09:28:56.252470  515472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/ca.pem (1078 bytes)
	I1026 09:28:56.252570  515472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem, removing ...
	I1026 09:28:56.252576  515472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem
	I1026 09:28:56.252600  515472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/cert.pem (1123 bytes)
	I1026 09:28:56.252686  515472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem, removing ...
	I1026 09:28:56.252691  515472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem
	I1026 09:28:56.252713  515472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-293616/.minikube/key.pem (1679 bytes)
	I1026 09:28:56.252766  515472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem org=jenkins.newest-cni-596581 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-596581]
	I1026 09:28:56.726956  515472 provision.go:177] copyRemoteCerts
	I1026 09:28:56.727072  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 09:28:56.727132  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:56.744777  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:56.850685  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 09:28:56.875890  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 09:28:56.905265  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 09:28:56.936025  515472 provision.go:87] duration metric: took 711.638488ms to configureAuth
	I1026 09:28:56.936101  515472 ubuntu.go:206] setting minikube options for container-runtime
	I1026 09:28:56.936354  515472 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:28:56.936509  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:56.963225  515472 main.go:141] libmachine: Using SSH client type: native
	I1026 09:28:56.963531  515472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1026 09:28:56.963546  515472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 09:28:57.355674  515472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 09:28:57.355698  515472 machine.go:96] duration metric: took 4.780733213s to provisionDockerMachine
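	The CRIO_MINIKUBE_OPTIONS drop-in written just above, followed by the crio restart, is what makes the service CIDR 10.96.0.0/12 usable as an insecure registry from inside the node. A quick way to confirm the drop-in landed and that the crio unit actually sources it (a hypothetical follow-up check, not part of this run; it assumes the packaged crio unit picks up the sysconfig file via an EnvironmentFile= directive):

		# show the options minikube wrote
		cat /etc/sysconfig/crio.minikube
		# show the full unit, including any EnvironmentFile= lines that source it
		systemctl cat crio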
	I1026 09:28:57.355710  515472 start.go:293] postStartSetup for "newest-cni-596581" (driver="docker")
	I1026 09:28:57.355741  515472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 09:28:57.355846  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 09:28:57.355911  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.388431  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:57.511963  515472 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 09:28:57.515618  515472 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 09:28:57.515654  515472 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 09:28:57.515665  515472 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/addons for local assets ...
	I1026 09:28:57.515719  515472 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-293616/.minikube/files for local assets ...
	I1026 09:28:57.515801  515472 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem -> 2954752.pem in /etc/ssl/certs
	I1026 09:28:57.515905  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 09:28:57.532040  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:28:57.559583  515472 start.go:296] duration metric: took 203.837391ms for postStartSetup
	I1026 09:28:57.559685  515472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:28:57.559770  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.590891  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:57.704286  515472 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 09:28:57.712704  515472 fix.go:56] duration metric: took 5.594109766s for fixHost
	I1026 09:28:57.712727  515472 start.go:83] releasing machines lock for "newest-cni-596581", held for 5.594162033s
	I1026 09:28:57.712801  515472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-596581
	I1026 09:28:57.766986  515472 ssh_runner.go:195] Run: cat /version.json
	I1026 09:28:57.767033  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.767271  515472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 09:28:57.767325  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:28:57.805145  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:57.808067  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:28:58.051678  515472 ssh_runner.go:195] Run: systemctl --version
	I1026 09:28:58.059067  515472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 09:28:58.131174  515472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 09:28:58.139982  515472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 09:28:58.140090  515472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 09:28:58.147879  515472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 09:28:58.147903  515472 start.go:495] detecting cgroup driver to use...
	I1026 09:28:58.147963  515472 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 09:28:58.148030  515472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 09:28:58.163267  515472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 09:28:58.177412  515472 docker.go:218] disabling cri-docker service (if available) ...
	I1026 09:28:58.177513  515472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 09:28:58.193836  515472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 09:28:58.210410  515472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 09:28:58.411592  515472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 09:28:58.628760  515472 docker.go:234] disabling docker service ...
	I1026 09:28:58.628860  515472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 09:28:58.650131  515472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 09:28:58.684432  515472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 09:28:58.874764  515472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 09:28:59.106040  515472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 09:28:59.126273  515472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 09:28:59.144353  515472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 09:28:59.144476  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.161395  515472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 09:28:59.161503  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.185362  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.197707  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.216519  515472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 09:28:59.225926  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.235011  515472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 09:28:59.243797  515472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
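	The sed chain above (delete any stale entry, ensure a default_sysctls block exists, then prepend the sysctl inside it) should leave /etc/crio/crio.conf.d/02-crio.conf with a fragment like the following, which lets non-root pods bind ports below 1024. This is reconstructed from the commands themselves, not captured from the node:

		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]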
	I1026 09:28:59.255908  515472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 09:28:59.267759  515472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 09:28:59.279702  515472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:28:59.479201  515472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 09:28:59.682445  515472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 09:28:59.682543  515472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 09:28:59.691658  515472 start.go:563] Will wait 60s for crictl version
	I1026 09:28:59.691753  515472 ssh_runner.go:195] Run: which crictl
	I1026 09:28:59.695268  515472 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 09:28:59.765413  515472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 09:28:59.765538  515472 ssh_runner.go:195] Run: crio --version
	I1026 09:28:59.819540  515472 ssh_runner.go:195] Run: crio --version
	I1026 09:28:59.875543  515472 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 09:28:59.879353  515472 cli_runner.go:164] Run: docker network inspect newest-cni-596581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 09:28:59.906381  515472 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 09:28:59.912517  515472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
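	The bash one-liner above is minikube's standard trick for refreshing an /etc/hosts entry without truncating the file mid-write: filter out any stale host.minikube.internal line, append the current mapping, stage the result in a temp file, and copy it back in a single sudo cp. The same pattern in generic form (illustrative only; NAME and IP are placeholders):

		NAME=host.minikube.internal IP=192.168.76.1
		# drop the old entry, append the new one, then swap the staged file in
		{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts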
	I1026 09:28:59.926578  515472 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 09:28:59.929485  515472 kubeadm.go:883] updating cluster {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 09:28:59.929647  515472 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 09:28:59.929722  515472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:28:59.979242  515472 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:28:59.979270  515472 crio.go:433] Images already preloaded, skipping extraction
	I1026 09:28:59.979330  515472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 09:29:00.017498  515472 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 09:29:00.017604  515472 cache_images.go:85] Images are preloaded, skipping loading
	I1026 09:29:00.017629  515472 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 09:29:00.017788  515472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-596581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 09:29:00.017929  515472 ssh_runner.go:195] Run: crio config
	I1026 09:29:00.125101  515472 cni.go:84] Creating CNI manager for ""
	I1026 09:29:00.125183  515472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:29:00.125222  515472 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 09:29:00.125284  515472 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-596581 NodeName:newest-cni-596581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 09:29:00.125488  515472 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-596581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
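	The log never validates this generated config directly; it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and only diffed against the file already on the node. To check a config of this shape by hand, kubeadm supports a dry run (hypothetical command, not part of this run; it assumes a kubeadm binary is staged alongside kubelet and kubectl under /var/lib/minikube/binaries/v1.34.1, which this log does not show):

		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run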
	
	I1026 09:29:00.125612  515472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 09:29:00.143591  515472 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 09:29:00.143749  515472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 09:29:00.156632  515472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 09:29:00.182247  515472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 09:29:00.208977  515472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1026 09:29:00.250316  515472 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 09:29:00.258486  515472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 09:29:00.291693  515472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:29:00.515934  515472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:29:00.553661  515472 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581 for IP: 192.168.76.2
	I1026 09:29:00.553737  515472 certs.go:195] generating shared ca certs ...
	I1026 09:29:00.553769  515472 certs.go:227] acquiring lock for ca certs: {Name:mk2ffca17d8442484f59de46b35590082a253fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:00.553937  515472 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key
	I1026 09:29:00.554020  515472 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key
	I1026 09:29:00.554054  515472 certs.go:257] generating profile certs ...
	I1026 09:29:00.554184  515472 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/client.key
	I1026 09:29:00.554302  515472 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key.334b42ff
	I1026 09:29:00.554391  515472 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key
	I1026 09:29:00.554553  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem (1338 bytes)
	W1026 09:29:00.554615  515472 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475_empty.pem, impossibly tiny 0 bytes
	I1026 09:29:00.554640  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 09:29:00.554698  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/ca.pem (1078 bytes)
	I1026 09:29:00.554791  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/cert.pem (1123 bytes)
	I1026 09:29:00.554855  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/certs/key.pem (1679 bytes)
	I1026 09:29:00.554949  515472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem (1708 bytes)
	I1026 09:29:00.560324  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 09:29:00.601812  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 09:29:00.654099  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 09:29:00.689112  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 09:29:00.743800  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 09:29:00.768991  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 09:29:00.789605  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 09:29:00.816062  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/newest-cni-596581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 09:29:00.851865  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/ssl/certs/2954752.pem --> /usr/share/ca-certificates/2954752.pem (1708 bytes)
	I1026 09:29:00.897639  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 09:29:00.941576  515472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-293616/.minikube/certs/295475.pem --> /usr/share/ca-certificates/295475.pem (1338 bytes)
	I1026 09:29:00.977751  515472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 09:29:00.990598  515472 ssh_runner.go:195] Run: openssl version
	I1026 09:29:00.999962  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 09:29:01.010901  515472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:29:01.016297  515472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 08:13 /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:29:01.016419  515472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 09:29:01.076188  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 09:29:01.091687  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295475.pem && ln -fs /usr/share/ca-certificates/295475.pem /etc/ssl/certs/295475.pem"
	I1026 09:29:01.101879  515472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295475.pem
	I1026 09:29:01.107300  515472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 08:20 /usr/share/ca-certificates/295475.pem
	I1026 09:29:01.107389  515472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295475.pem
	I1026 09:29:01.153044  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295475.pem /etc/ssl/certs/51391683.0"
	I1026 09:29:01.164905  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2954752.pem && ln -fs /usr/share/ca-certificates/2954752.pem /etc/ssl/certs/2954752.pem"
	I1026 09:29:01.179404  515472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2954752.pem
	I1026 09:29:01.184449  515472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 08:20 /usr/share/ca-certificates/2954752.pem
	I1026 09:29:01.184558  515472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2954752.pem
	I1026 09:29:01.230661  515472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2954752.pem /etc/ssl/certs/3ec20f2e.0"
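	The three test -L / ln -fs pairs above implement OpenSSL's hashed CA directory layout: each certificate in /etc/ssl/certs is made reachable through a <subject-hash>.0 symlink, where the hash (b5213941, 51391683, 3ec20f2e here) is exactly what openssl x509 -hash -noout prints for that PEM. The links could be rebuilt by hand like this (illustrative sketch of the same pattern):

		for pem in minikubeCA 295475 2954752; do
		  h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${pem}.pem")
		  sudo ln -fs "/etc/ssl/certs/${pem}.pem" "/etc/ssl/certs/${h}.0"
		done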
	I1026 09:29:01.239574  515472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 09:29:01.244054  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 09:29:01.294677  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 09:29:01.410586  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 09:29:01.563354  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 09:29:01.629306  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 09:29:01.736974  515472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 09:29:01.821284  515472 kubeadm.go:400] StartCluster: {Name:newest-cni-596581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-596581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 09:29:01.821434  515472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 09:29:01.821539  515472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 09:29:01.899261  515472 cri.go:89] found id: "d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea"
	I1026 09:29:01.899324  515472 cri.go:89] found id: "f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8"
	I1026 09:29:01.899354  515472 cri.go:89] found id: "6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93"
	I1026 09:29:01.899374  515472 cri.go:89] found id: "8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c"
	I1026 09:29:01.899395  515472 cri.go:89] found id: ""
	I1026 09:29:01.899479  515472 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 09:29:01.944828  515472 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T09:29:01Z" level=error msg="open /run/runc: no such file or directory"
	I1026 09:29:01.944991  515472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 09:29:01.978627  515472 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 09:29:01.978701  515472 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 09:29:01.978796  515472 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 09:29:02.011613  515472 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 09:29:02.012138  515472 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-596581" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:29:02.012320  515472 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-293616/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-596581" cluster setting kubeconfig missing "newest-cni-596581" context setting]
	I1026 09:29:02.012684  515472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:02.014413  515472 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 09:29:02.038291  515472 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 09:29:02.038374  515472 kubeadm.go:601] duration metric: took 59.631248ms to restartPrimaryControlPlane
	I1026 09:29:02.038398  515472 kubeadm.go:402] duration metric: took 217.123859ms to StartCluster
	I1026 09:29:02.038436  515472 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:02.038521  515472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:29:02.039244  515472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:02.039509  515472 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:29:02.039854  515472 config.go:182] Loaded profile config "newest-cni-596581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:29:02.039935  515472 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:29:02.040090  515472 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-596581"
	I1026 09:29:02.040150  515472 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-596581"
	W1026 09:29:02.040218  515472 addons.go:247] addon storage-provisioner should already be in state true
	I1026 09:29:02.040262  515472 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:02.040981  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.041195  515472 addons.go:69] Setting dashboard=true in profile "newest-cni-596581"
	I1026 09:29:02.041235  515472 addons.go:238] Setting addon dashboard=true in "newest-cni-596581"
	W1026 09:29:02.041266  515472 addons.go:247] addon dashboard should already be in state true
	I1026 09:29:02.041304  515472 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:02.041580  515472 addons.go:69] Setting default-storageclass=true in profile "newest-cni-596581"
	I1026 09:29:02.041663  515472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-596581"
	I1026 09:29:02.041937  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.042340  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.047867  515472 out.go:179] * Verifying Kubernetes components...
	I1026 09:29:02.050975  515472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:29:02.106837  515472 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:29:02.111667  515472 addons.go:238] Setting addon default-storageclass=true in "newest-cni-596581"
	W1026 09:29:02.111705  515472 addons.go:247] addon default-storageclass should already be in state true
	I1026 09:29:02.111739  515472 host.go:66] Checking if "newest-cni-596581" exists ...
	I1026 09:29:02.112217  515472 cli_runner.go:164] Run: docker container inspect newest-cni-596581 --format={{.State.Status}}
	I1026 09:29:02.115316  515472 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:02.115336  515472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:29:02.115395  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:02.125910  515472 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 09:29:02.130480  515472 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 09:28:59.727285  512470 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.819975995s
	I1026 09:29:01.406831  512470 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.495820615s
	I1026 09:29:02.908807  512470 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.001738263s
	I1026 09:29:02.959727  512470 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 09:29:03.049222  512470 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 09:29:03.096320  512470 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 09:29:03.097011  512470 kubeadm.go:318] [mark-control-plane] Marking the node auto-796399 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 09:29:03.127223  512470 kubeadm.go:318] [bootstrap-token] Using token: sd9xx6.ik4a8kq1wzxesora
	I1026 09:29:03.131012  512470 out.go:252]   - Configuring RBAC rules ...
	I1026 09:29:03.131166  512470 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 09:29:03.149519  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 09:29:03.168322  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 09:29:03.179360  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 09:29:03.187729  512470 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 09:29:03.199130  512470 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 09:29:03.320313  512470 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 09:29:03.959116  512470 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 09:29:04.340806  512470 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 09:29:04.341926  512470 kubeadm.go:318] 
	I1026 09:29:04.342030  512470 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 09:29:04.342037  512470 kubeadm.go:318] 
	I1026 09:29:04.342126  512470 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 09:29:04.342131  512470 kubeadm.go:318] 
	I1026 09:29:04.342170  512470 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 09:29:04.342238  512470 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 09:29:04.342296  512470 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 09:29:04.342301  512470 kubeadm.go:318] 
	I1026 09:29:04.342367  512470 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 09:29:04.342372  512470 kubeadm.go:318] 
	I1026 09:29:04.342429  512470 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 09:29:04.342439  512470 kubeadm.go:318] 
	I1026 09:29:04.342500  512470 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 09:29:04.342591  512470 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 09:29:04.342673  512470 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 09:29:04.342684  512470 kubeadm.go:318] 
	I1026 09:29:04.347290  512470 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 09:29:04.347398  512470 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 09:29:04.347404  512470 kubeadm.go:318] 
	I1026 09:29:04.347502  512470 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sd9xx6.ik4a8kq1wzxesora \
	I1026 09:29:04.347628  512470 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 \
	I1026 09:29:04.347656  512470 kubeadm.go:318] 	--control-plane 
	I1026 09:29:04.347664  512470 kubeadm.go:318] 
	I1026 09:29:04.347764  512470 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 09:29:04.347769  512470 kubeadm.go:318] 
	I1026 09:29:04.347867  512470 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sd9xx6.ik4a8kq1wzxesora \
	I1026 09:29:04.347987  512470 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:89fea3d4026e7fe36763ad1de7bbe436bc679550dfd12b197342bd11782d1127 
	I1026 09:29:04.356245  512470 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 09:29:04.356709  512470 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 09:29:04.356943  512470 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 09:29:04.357003  512470 cni.go:84] Creating CNI manager for ""
	I1026 09:29:04.357034  512470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 09:29:04.362241  512470 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 09:29:02.133325  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 09:29:02.133369  515472 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 09:29:02.133456  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:02.158873  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:02.191030  515472 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:02.191051  515472 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:29:02.191045  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:02.191119  515472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-596581
	I1026 09:29:02.220701  515472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/newest-cni-596581/id_rsa Username:docker}
	I1026 09:29:02.585853  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 09:29:02.585917  515472 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 09:29:02.593490  515472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:02.647077  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 09:29:02.647144  515472 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 09:29:02.694831  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 09:29:02.694902  515472 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 09:29:02.708428  515472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:02.811654  515472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:29:02.822411  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 09:29:02.822484  515472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 09:29:02.886069  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 09:29:02.886143  515472 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 09:29:03.024568  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 09:29:03.024643  515472 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 09:29:03.120997  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 09:29:03.121020  515472 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 09:29:03.235878  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 09:29:03.235910  515472 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 09:29:03.271781  515472 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:29:03.271807  515472 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 09:29:03.307986  515472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 09:29:04.365172  512470 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 09:29:04.379866  512470 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 09:29:04.379898  512470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 09:29:04.435236  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
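	After the kindnet CNI manifest is applied above, the log moves straight on to RBAC and node labeling without showing the CNI pods coming up. A way to watch for that by hand (hypothetical check, not part of this run; it assumes the applied manifest labels its daemonset pods app=kindnet, which the log does not show):

		sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet -w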
	I1026 09:29:05.201671  512470 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 09:29:05.201808  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:05.201879  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-796399 minikube.k8s.io/updated_at=2025_10_26T09_29_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=auto-796399 minikube.k8s.io/primary=true
	I1026 09:29:05.612682  512470 ops.go:34] apiserver oom_adj: -16
	I1026 09:29:05.612812  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:06.113871  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:06.613368  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:07.113446  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:07.612887  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:08.113437  512470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 09:29:08.383555  512470 kubeadm.go:1113] duration metric: took 3.181790393s to wait for elevateKubeSystemPrivileges
	I1026 09:29:08.383587  512470 kubeadm.go:402] duration metric: took 23.218859979s to StartCluster
	I1026 09:29:08.383605  512470 settings.go:142] acquiring lock: {Name:mk255cafbe646fc402e5468b85b382bbb9baadf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:08.383671  512470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:29:08.384638  512470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/kubeconfig: {Name:mkbdf1f3cd53a25aff3e66a319db1a3916615501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 09:29:08.384863  512470 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 09:29:08.384955  512470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 09:29:08.385215  512470 config.go:182] Loaded profile config "auto-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:29:08.385228  512470 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 09:29:08.385311  512470 addons.go:69] Setting storage-provisioner=true in profile "auto-796399"
	I1026 09:29:08.385328  512470 addons.go:238] Setting addon storage-provisioner=true in "auto-796399"
	I1026 09:29:08.385354  512470 host.go:66] Checking if "auto-796399" exists ...
	I1026 09:29:08.385860  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:29:08.386034  512470 addons.go:69] Setting default-storageclass=true in profile "auto-796399"
	I1026 09:29:08.386051  512470 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-796399"
	I1026 09:29:08.386297  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:29:08.391013  512470 out.go:179] * Verifying Kubernetes components...
	I1026 09:29:08.394069  512470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 09:29:08.427573  512470 addons.go:238] Setting addon default-storageclass=true in "auto-796399"
	I1026 09:29:08.427613  512470 host.go:66] Checking if "auto-796399" exists ...
	I1026 09:29:08.428064  512470 cli_runner.go:164] Run: docker container inspect auto-796399 --format={{.State.Status}}
	I1026 09:29:08.429411  512470 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 09:29:08.432361  512470 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:08.432395  512470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 09:29:08.432473  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:29:08.464665  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:29:08.476448  512470 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:08.476469  512470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 09:29:08.476538  512470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-796399
	I1026 09:29:08.502361  512470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/auto-796399/id_rsa Username:docker}
	I1026 09:29:08.988110  512470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 09:29:08.995536  512470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 09:29:09.243186  512470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 09:29:09.243436  512470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 09:29:10.828981  512470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.840768429s)
	I1026 09:29:10.829108  512470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.833507123s)
	I1026 09:29:10.829177  512470 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.585924698s)
	I1026 09:29:10.830466  512470 node_ready.go:35] waiting up to 15m0s for node "auto-796399" to be "Ready" ...
	I1026 09:29:10.829186  512470 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.585660029s)
	I1026 09:29:10.830864  512470 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 09:29:10.903179  512470 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
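The sed pipeline completed above splices a hosts block into the coredns Corefile so that pods can resolve host.minikube.internal. Reconstructed from that command (the neighboring forward directive belongs to the stock ConfigMap and is shown only for placement), the injected fragment looks roughly like:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The same pass also inserts a log directive ahead of errors.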
	I1026 09:29:11.904126  515472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.310563856s)
	I1026 09:29:11.904241  515472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.195733467s)
	I1026 09:29:11.904271  515472 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.09254511s)
	I1026 09:29:11.904568  515472 api_server.go:52] waiting for apiserver process to appear ...
	I1026 09:29:11.904625  515472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:29:11.904369  515472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.596338697s)
	I1026 09:29:11.907648  515472 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-596581 addons enable metrics-server
	
	I1026 09:29:11.923741  515472 api_server.go:72] duration metric: took 9.884162892s to wait for apiserver process to appear ...
	I1026 09:29:11.923766  515472 api_server.go:88] waiting for apiserver healthz status ...
	I1026 09:29:11.923787  515472 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 09:29:11.937522  515472 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 09:29:11.939190  515472 api_server.go:141] control plane version: v1.34.1
	I1026 09:29:11.939223  515472 api_server.go:131] duration metric: took 15.449672ms to wait for apiserver health ...
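The healthz wait above boils down to polling the printed endpoint until it answers 200. A minimal, self-contained sketch of such a check follows; this is not minikube's actual api_server.go implementation, and the timeout, poll interval, and skipped TLS verification are assumptions made for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves a self-signed certificate, so verification
		// is skipped here purely for the sake of the example.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz returned 200: ok") // matches the log line above
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}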
	I1026 09:29:11.939233  515472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 09:29:11.943630  515472 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 09:29:11.946013  515472 system_pods.go:59] 8 kube-system pods found
	I1026 09:29:11.946050  515472 system_pods.go:61] "coredns-66bc5c9577-ls7nq" [62473023-ba0e-4958-991d-1a2cde76799e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 09:29:11.946062  515472 system_pods.go:61] "etcd-newest-cni-596581" [264f14d5-6146-4dcb-9f23-d72280bb5ea2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 09:29:11.946068  515472 system_pods.go:61] "kindnet-2j87q" [8ac4e2f0-aa4a-4f25-9328-aefbce3cde40] Running
	I1026 09:29:11.946076  515472 system_pods.go:61] "kube-apiserver-newest-cni-596581" [cbc6496d-a07d-4174-a276-5e1829b8b8b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 09:29:11.946083  515472 system_pods.go:61] "kube-controller-manager-newest-cni-596581" [23925cb0-7d94-4d90-8550-de65406a9bc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 09:29:11.946088  515472 system_pods.go:61] "kube-proxy-72xqz" [bbd599f1-02d6-4a30-b5d6-a2d81d11c10e] Running
	I1026 09:29:11.946094  515472 system_pods.go:61] "kube-scheduler-newest-cni-596581" [644365d1-94e6-4b78-84e9-fae9ef2bfb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 09:29:11.946099  515472 system_pods.go:61] "storage-provisioner" [f949f69f-a15f-4d9d-b1b7-5f29bed135bf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 09:29:11.946106  515472 system_pods.go:74] duration metric: took 6.867158ms to wait for pod list to return data ...
	I1026 09:29:11.946115  515472 default_sa.go:34] waiting for default service account to be created ...
	I1026 09:29:11.947290  515472 addons.go:514] duration metric: took 9.907352521s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 09:29:11.948867  515472 default_sa.go:45] found service account: "default"
	I1026 09:29:11.948886  515472 default_sa.go:55] duration metric: took 2.766157ms for default service account to be created ...
	I1026 09:29:11.948897  515472 kubeadm.go:586] duration metric: took 9.909325635s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 09:29:11.948912  515472 node_conditions.go:102] verifying NodePressure condition ...
	I1026 09:29:11.951474  515472 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 09:29:11.951499  515472 node_conditions.go:123] node cpu capacity is 2
	I1026 09:29:11.951511  515472 node_conditions.go:105] duration metric: took 2.594192ms to run NodePressure ...
	I1026 09:29:11.951523  515472 start.go:241] waiting for startup goroutines ...
	I1026 09:29:11.951530  515472 start.go:246] waiting for cluster config update ...
	I1026 09:29:11.951541  515472 start.go:255] writing updated cluster config ...
	I1026 09:29:11.951837  515472 ssh_runner.go:195] Run: rm -f paused
	I1026 09:29:12.030461  515472 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 09:29:12.033770  515472 out.go:179] * Done! kubectl is now configured to use "newest-cni-596581" cluster and "default" namespace by default
	I1026 09:29:10.906127  512470 addons.go:514] duration metric: took 2.520895084s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 09:29:11.341664  512470 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-796399" context rescaled to 1 replicas
	W1026 09:29:12.833758  512470 node_ready.go:57] node "auto-796399" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.445691845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.455838571Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4d8e2e92-a904-42e2-940d-c9242b1654e5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.469323449Z" level=info msg="Ran pod sandbox 991bf2acf82be1c43a57d787cd7d567a8e4bd5796b6001eb90562fe546169249 with infra container: kube-system/kindnet-2j87q/POD" id=4d8e2e92-a904-42e2-940d-c9242b1654e5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.47038967Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-72xqz/POD" id=f13b6d00-5eab-42e7-9518-50225361652f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.470507768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.485448494Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=43430853-ef9d-4ba1-943e-f6a18adb81b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.49205439Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f13b6d00-5eab-42e7-9518-50225361652f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.50293829Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0ebcb1fc-0bc7-422e-bf52-5983c992d2a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.523434461Z" level=info msg="Creating container: kube-system/kindnet-2j87q/kindnet-cni" id=d48cf4ae-0b75-471b-b4c9-9a6ebc32c952 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.523894996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.539363409Z" level=info msg="Ran pod sandbox 09a691a853d9df7ec0194a5646de262c730b02db1275e7ea99225c60d2d4b4d5 with infra container: kube-system/kube-proxy-72xqz/POD" id=f13b6d00-5eab-42e7-9518-50225361652f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.594967019Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6b4a872d-ca23-4f5b-906c-cb614202d7ef name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.606406078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.607789464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.612215024Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=afb691e0-3acf-4892-b64b-51cd65c48a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.628365529Z" level=info msg="Creating container: kube-system/kube-proxy-72xqz/kube-proxy" id=213bf576-ef0c-4d8f-b923-9b70d79a11aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.628502573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.63616893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.657460155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.715936878Z" level=info msg="Created container a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d: kube-system/kindnet-2j87q/kindnet-cni" id=d48cf4ae-0b75-471b-b4c9-9a6ebc32c952 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.723390834Z" level=info msg="Starting container: a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d" id=503727c9-89aa-46b9-a57b-3e366b645158 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.727789661Z" level=info msg="Started container" PID=1057 containerID=a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d description=kube-system/kindnet-2j87q/kindnet-cni id=503727c9-89aa-46b9-a57b-3e366b645158 name=/runtime.v1.RuntimeService/StartContainer sandboxID=991bf2acf82be1c43a57d787cd7d567a8e4bd5796b6001eb90562fe546169249
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.746468511Z" level=info msg="Created container 3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709: kube-system/kube-proxy-72xqz/kube-proxy" id=213bf576-ef0c-4d8f-b923-9b70d79a11aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.755640209Z" level=info msg="Starting container: 3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709" id=3eb9480a-5f8a-4710-a322-f3a92fc35d9c name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 09:29:10 newest-cni-596581 crio[612]: time="2025-10-26T09:29:10.765379701Z" level=info msg="Started container" PID=1060 containerID=3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709 description=kube-system/kube-proxy-72xqz/kube-proxy id=3eb9480a-5f8a-4710-a322-f3a92fc35d9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=09a691a853d9df7ec0194a5646de262c730b02db1275e7ea99225c60d2d4b4d5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b64b58362f0b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   09a691a853d9d       kube-proxy-72xqz                            kube-system
	a0f17c416b5a1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   991bf2acf82be       kindnet-2j87q                               kube-system
	d16d844e0356c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   8b848a101bb09       kube-controller-manager-newest-cni-596581   kube-system
	f395de9cd02d7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   30cb4995906e6       kube-apiserver-newest-cni-596581            kube-system
	6d54c056352f2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   34448a782213a       etcd-newest-cni-596581                      kube-system
	8e7b50e63ed8f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   3ba42f3d100ef       kube-scheduler-newest-cni-596581            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-596581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-596581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=newest-cni-596581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T09_28_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 09:28:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-596581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 09:29:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 09:29:09 +0000   Sun, 26 Oct 2025 09:28:30 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-596581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                93ba9819-a70b-44ae-b5e4-6adc0588dffe
	  Boot ID:                    b00df6dd-fa4b-415c-89a8-3c8e115556cb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-596581                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-2j87q                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-596581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-596581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-72xqz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-596581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 50s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 50s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     39s                kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  39s                kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s                kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-596581 event: Registered Node newest-cni-596581 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 18s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 18s)  kubelet          Node newest-cni-596581 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 18s)  kubelet          Node newest-cni-596581 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-596581 event: Registered Node newest-cni-596581 in Controller
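Read together with the Taints line above (node.kubernetes.io/not-ready:NoSchedule) and the Ready=False condition, these events explain the Pending coredns and storage-provisioner pods in the system_pods listing earlier: nothing new can be scheduled until a CNI config appears under /etc/cni/net.d/ and the node turns Ready, at which point the not-ready taint is removed. The kindnet-cni container restarted at 09:29:10 is the component expected to write that config.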
	
	
	==> dmesg <==
	[Oct26 09:05] overlayfs: idmapped layers are currently not supported
	[ +26.703198] overlayfs: idmapped layers are currently not supported
	[Oct26 09:06] overlayfs: idmapped layers are currently not supported
	[Oct26 09:07] overlayfs: idmapped layers are currently not supported
	[Oct26 09:08] overlayfs: idmapped layers are currently not supported
	[Oct26 09:09] overlayfs: idmapped layers are currently not supported
	[Oct26 09:11] overlayfs: idmapped layers are currently not supported
	[Oct26 09:12] overlayfs: idmapped layers are currently not supported
	[Oct26 09:13] overlayfs: idmapped layers are currently not supported
	[Oct26 09:15] overlayfs: idmapped layers are currently not supported
	[Oct26 09:17] overlayfs: idmapped layers are currently not supported
	[Oct26 09:18] overlayfs: idmapped layers are currently not supported
	[ +38.574344] overlayfs: idmapped layers are currently not supported
	[Oct26 09:22] overlayfs: idmapped layers are currently not supported
	[ +42.981389] overlayfs: idmapped layers are currently not supported
	[ +10.168203] overlayfs: idmapped layers are currently not supported
	[Oct26 09:24] overlayfs: idmapped layers are currently not supported
	[ +28.515669] overlayfs: idmapped layers are currently not supported
	[Oct26 09:25] overlayfs: idmapped layers are currently not supported
	[ +19.906685] overlayfs: idmapped layers are currently not supported
	[Oct26 09:27] overlayfs: idmapped layers are currently not supported
	[ +20.253625] overlayfs: idmapped layers are currently not supported
	[Oct26 09:28] overlayfs: idmapped layers are currently not supported
	[ +24.446098] overlayfs: idmapped layers are currently not supported
	[Oct26 09:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d54c056352f2369d1191f0905b039cb441d61e05acaf2f4c4047397138dfa93] <==
	{"level":"warn","ts":"2025-10-26T09:29:06.279243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.308264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.338381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.355384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.374511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.394649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.414966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.434869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.499255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.542835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.571595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.598507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.655753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.766871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.797201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.825973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.845179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.919938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.950365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:06.973325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.027135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.081014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.147570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.216570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T09:29:07.357335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:29:18 up  3:11,  0 user,  load average: 7.53, 4.47, 3.37
	Linux newest-cni-596581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0f17c416b5a11fc96fbfb7b99683380eec76a14613195fb799ae85c7aef1b7d] <==
	I1026 09:29:10.846285       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 09:29:10.846541       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 09:29:10.846648       1 main.go:148] setting mtu 1500 for CNI 
	I1026 09:29:10.846659       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 09:29:10.846671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T09:29:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 09:29:11.052184       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 09:29:11.052204       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 09:29:11.052213       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 09:29:11.052508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f395de9cd02d7e42a8712630d7520535e9fff30a312dca9a981d99b9d8d20ce8] <==
	I1026 09:29:09.799390       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 09:29:09.799457       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 09:29:09.799497       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 09:29:09.777140       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 09:29:09.800076       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 09:29:09.823128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 09:29:09.824179       1 aggregator.go:171] initial CRD sync complete...
	I1026 09:29:09.824194       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 09:29:09.824201       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 09:29:09.824208       1 cache.go:39] Caches are synced for autoregister controller
	I1026 09:29:09.826506       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 09:29:09.842453       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1026 09:29:09.915095       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 09:29:10.157304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 09:29:10.264598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 09:29:11.421787       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 09:29:11.513626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 09:29:11.560767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 09:29:11.579213       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 09:29:11.703060       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.109.94"}
	I1026 09:29:11.729885       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.238.156"}
	I1026 09:29:13.780791       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 09:29:13.873927       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 09:29:14.123524       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 09:29:14.277571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d16d844e0356c4659210b03b690863cfec5feff1f4c2043f261f501f9dab16ea] <==
	I1026 09:29:13.751967       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 09:29:13.760434       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 09:29:13.763696       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 09:29:13.770145       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 09:29:13.770290       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 09:29:13.770464       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 09:29:13.770529       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 09:29:13.771588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:29:13.771616       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 09:29:13.771622       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 09:29:13.771673       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 09:29:13.771720       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 09:29:13.772661       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 09:29:13.778977       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 09:29:13.779717       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 09:29:13.781942       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 09:29:13.785017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 09:29:13.785091       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 09:29:13.788616       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 09:29:13.791699       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 09:29:13.796514       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 09:29:13.799005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 09:29:13.805276       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 09:29:13.815688       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 09:29:13.839606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [3b64b58362f0be1b1d5010111c3de2d3e54a13c23cc4b42fb955d107a265f709] <==
	I1026 09:29:11.334535       1 server_linux.go:53] "Using iptables proxy"
	I1026 09:29:11.553658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 09:29:11.664751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 09:29:11.664795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 09:29:11.664894       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 09:29:11.727972       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 09:29:11.728092       1 server_linux.go:132] "Using iptables Proxier"
	I1026 09:29:11.745156       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 09:29:11.745646       1 server.go:527] "Version info" version="v1.34.1"
	I1026 09:29:11.745819       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:29:11.747267       1 config.go:200] "Starting service config controller"
	I1026 09:29:11.747408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 09:29:11.747463       1 config.go:106] "Starting endpoint slice config controller"
	I1026 09:29:11.747492       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 09:29:11.747528       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 09:29:11.747556       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 09:29:11.748162       1 config.go:309] "Starting node config controller"
	I1026 09:29:11.750580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 09:29:11.750636       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 09:29:11.848298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 09:29:11.848334       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 09:29:11.848378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8e7b50e63ed8f5951cbe016ec9bab456f651ec6ca793086afa2a9affacdb204c] <==
	I1026 09:29:06.474139       1 serving.go:386] Generated self-signed cert in-memory
	W1026 09:29:09.429166       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 09:29:09.429325       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 09:29:09.429365       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 09:29:09.429396       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 09:29:09.908235       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 09:29:09.908331       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 09:29:09.949819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 09:29:09.955028       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:29:09.955060       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 09:29:09.955091       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 09:29:10.055743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.366619     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.394290     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.768435     726 apiserver.go:52] "Watching apiserver"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.893976     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: E1026 09:29:09.933907     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-596581\" already exists" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: E1026 09:29:09.934282     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-596581\" already exists" pod="kube-system/kube-scheduler-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.934302     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984348     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-xtables-lock\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984433     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-cni-cfg\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984454     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac4e2f0-aa4a-4f25-9328-aefbce3cde40-lib-modules\") pod \"kindnet-2j87q\" (UID: \"8ac4e2f0-aa4a-4f25-9328-aefbce3cde40\") " pod="kube-system/kindnet-2j87q"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984470     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-xtables-lock\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.984488     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd599f1-02d6-4a30-b5d6-a2d81d11c10e-lib-modules\") pod \"kube-proxy-72xqz\" (UID: \"bbd599f1-02d6-4a30-b5d6-a2d81d11c10e\") " pod="kube-system/kube-proxy-72xqz"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.985816     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.986181     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-596581"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.986291     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 09:29:09 newest-cni-596581 kubelet[726]: I1026 09:29:09.987684     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: E1026 09:29:10.117233     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-596581\" already exists" pod="kube-system/etcd-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: I1026 09:29:10.117278     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: I1026 09:29:10.239776     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: E1026 09:29:10.315216     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-596581\" already exists" pod="kube-system/kube-apiserver-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: I1026 09:29:10.316505     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:10 newest-cni-596581 kubelet[726]: E1026 09:29:10.413888     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-596581\" already exists" pod="kube-system/kube-controller-manager-newest-cni-596581"
	Oct 26 09:29:13 newest-cni-596581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 09:29:13 newest-cni-596581 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 09:29:13 newest-cni-596581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-596581 -n newest-cni-596581
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-596581 -n newest-cni-596581: exit status 2 (364.613043ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-596581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2: exit status 1 (89.866969ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ls7nq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rcl8c" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9mbn2" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-596581 describe pod coredns-66bc5c9577-ls7nq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rcl8c kubernetes-dashboard-855c9754f9-9mbn2: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.55s)
E1026 09:34:59.521720  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:35:04.644033  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:35:14.885770  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/auto-796399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (257/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.52
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.12
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
27 TestAddons/Setup 171.76
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 9.89
48 TestAddons/StoppedEnableDisable 12.53
49 TestCertOptions 38.6
50 TestCertExpiration 240.36
52 TestForceSystemdFlag 40.89
53 TestForceSystemdEnv 35.08
58 TestErrorSpam/setup 35.95
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.17
61 TestErrorSpam/pause 5.83
62 TestErrorSpam/unpause 6.22
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 81.76
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.59
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
75 TestFunctional/serial/CacheCmd/cache/add_local 1.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 79.58
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.52
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 15.21
91 TestFunctional/parallel/DryRun 0.57
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.08
98 TestFunctional/parallel/AddonsCmd 0.24
99 TestFunctional/parallel/PersistentVolumeClaim 25.97
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.38
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 1.97
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
113 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 7.05
130 TestFunctional/parallel/MountCmd/specific-port 2.08
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
132 TestFunctional/parallel/ServiceCmd/List 0.6
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.38
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
144 TestFunctional/parallel/ImageCommands/Setup 0.66
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 209.09
163 TestMultiControlPlane/serial/DeployApp 36.86
164 TestMultiControlPlane/serial/PingHostFromPods 1.51
165 TestMultiControlPlane/serial/AddWorkerNode 59.8
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.72
169 TestMultiControlPlane/serial/StopSecondaryNode 12.92
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 108.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.3
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.01
176 TestMultiControlPlane/serial/StopCluster 36.17
177 TestMultiControlPlane/serial/RestartCluster 87.43
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 79.14
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.14
184 TestJSONOutput/start/Command 77.59
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.88
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 42.46
210 TestKicCustomNetwork/use_default_bridge_network 33.2
211 TestKicExistingNetwork 38.27
212 TestKicCustomSubnet 36.61
213 TestKicStaticIP 35.71
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 73.53
218 TestMountStart/serial/StartWithMountFirst 9.93
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 6.9
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.74
223 TestMountStart/serial/VerifyMountPostDelete 0.28
224 TestMountStart/serial/Stop 1.29
225 TestMountStart/serial/RestartStopped 8.41
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 138.34
230 TestMultiNode/serial/DeployApp2Nodes 4.81
231 TestMultiNode/serial/PingHostFrom2Pods 0.91
232 TestMultiNode/serial/AddNode 59.36
233 TestMultiNode/serial/MultiNodeLabels 0.1
234 TestMultiNode/serial/ProfileList 0.75
235 TestMultiNode/serial/CopyFile 10.57
236 TestMultiNode/serial/StopNode 2.41
237 TestMultiNode/serial/StartAfterStop 7.96
238 TestMultiNode/serial/RestartKeepsNodes 77.23
239 TestMultiNode/serial/DeleteNode 5.73
240 TestMultiNode/serial/StopMultiNode 24.02
241 TestMultiNode/serial/RestartMultiNode 56.88
242 TestMultiNode/serial/ValidateNameConflict 37.78
247 TestPreload 130.47
249 TestScheduledStopUnix 109.87
252 TestInsufficientStorage 13.05
253 TestRunningBinaryUpgrade 50.84
256 TestMissingContainerUpgrade 122.53
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
259 TestNoKubernetes/serial/StartWithK8s 42.22
260 TestNoKubernetes/serial/StartWithStopK8s 8.38
261 TestNoKubernetes/serial/Start 10.71
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
263 TestNoKubernetes/serial/ProfileList 2.87
264 TestNoKubernetes/serial/Stop 1.36
265 TestNoKubernetes/serial/StartNoArgs 7.59
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
267 TestStoppedBinaryUpgrade/Setup 1.54
268 TestStoppedBinaryUpgrade/Upgrade 60.16
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
278 TestPause/serial/Start 80.15
279 TestPause/serial/SecondStartNoReconfiguration 23.95
288 TestNetworkPlugins/group/false 3.84
293 TestStartStop/group/old-k8s-version/serial/FirstStart 73.62
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.45
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
298 TestStartStop/group/old-k8s-version/serial/Stop 12.15
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 47.45
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.48
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
305 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.88
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/embed-certs/serial/FirstStart 86.38
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
314 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
317 TestStartStop/group/no-preload/serial/FirstStart 71.12
318 TestStartStop/group/embed-certs/serial/DeployApp 9.42
320 TestStartStop/group/embed-certs/serial/Stop 12.16
321 TestStartStop/group/no-preload/serial/DeployApp 9.34
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/embed-certs/serial/SecondStart 48.66
325 TestStartStop/group/no-preload/serial/Stop 12.41
326 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
327 TestStartStop/group/no-preload/serial/SecondStart 54.32
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/newest-cni/serial/FirstStart 45.15
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
335 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
336 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
338 TestNetworkPlugins/group/auto/Start 85.02
339 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/Stop 1.49
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
343 TestStartStop/group/newest-cni/serial/SecondStart 20.77
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
348 TestNetworkPlugins/group/kindnet/Start 83.46
349 TestNetworkPlugins/group/auto/KubeletFlags 0.39
350 TestNetworkPlugins/group/auto/NetCatPod 10.38
351 TestNetworkPlugins/group/auto/DNS 0.18
352 TestNetworkPlugins/group/auto/Localhost 0.14
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/calico/Start 68.01
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.41
358 TestNetworkPlugins/group/kindnet/DNS 0.25
359 TestNetworkPlugins/group/kindnet/Localhost 0.19
360 TestNetworkPlugins/group/kindnet/HairPin 0.22
361 TestNetworkPlugins/group/custom-flannel/Start 66.43
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.3
364 TestNetworkPlugins/group/calico/NetCatPod 11.32
365 TestNetworkPlugins/group/calico/DNS 0.21
366 TestNetworkPlugins/group/calico/Localhost 0.16
367 TestNetworkPlugins/group/calico/HairPin 0.16
368 TestNetworkPlugins/group/enable-default-cni/Start 76.62
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.4
371 TestNetworkPlugins/group/custom-flannel/DNS 0.21
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
374 TestNetworkPlugins/group/flannel/Start 63.59
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
380 TestNetworkPlugins/group/bridge/Start 71.98
381 TestNetworkPlugins/group/flannel/ControllerPod 6
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
383 TestNetworkPlugins/group/flannel/NetCatPod 10.33
384 TestNetworkPlugins/group/flannel/DNS 0.22
385 TestNetworkPlugins/group/flannel/Localhost 0.17
386 TestNetworkPlugins/group/flannel/HairPin 0.17
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
388 TestNetworkPlugins/group/bridge/NetCatPod 10.26
389 TestNetworkPlugins/group/bridge/DNS 0.15
390 TestNetworkPlugins/group/bridge/Localhost 0.13
391 TestNetworkPlugins/group/bridge/HairPin 0.13

TestDownloadOnly/v1.28.0/json-events (5.26s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-923578 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-923578 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.259905693s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1026 08:13:13.912260  295475 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1026 08:13:13.912340  295475 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
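
Note: the preload-exists subtest is just a filesystem probe; the preceding json-events run downloaded the tarball, so this only confirms it is on disk. A rough equivalent of what preload.go reports above, assuming the cache path printed in the log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Path printed by preload.go above; adjust MINIKUBE_HOME as needed.
		home := "/home/jenkins/minikube-integration/21772-293616/.minikube"
		tarball := filepath.Join(home, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")

		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("Found local preload:", tarball)
	}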

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-923578
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-923578: exit status 85 (99.761417ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-923578 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-923578 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:13:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:13:08.701575  295480 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:13:08.701781  295480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:08.701810  295480 out.go:374] Setting ErrFile to fd 2...
	I1026 08:13:08.701829  295480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:08.702164  295480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	W1026 08:13:08.702428  295480 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21772-293616/.minikube/config/config.json: open /home/jenkins/minikube-integration/21772-293616/.minikube/config/config.json: no such file or directory
	I1026 08:13:08.702961  295480 out.go:368] Setting JSON to true
	I1026 08:13:08.703970  295480 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6939,"bootTime":1761459450,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:13:08.704075  295480 start.go:141] virtualization:  
	I1026 08:13:08.708032  295480 out.go:99] [download-only-923578] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1026 08:13:08.708229  295480 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 08:13:08.708357  295480 notify.go:220] Checking for updates...
	I1026 08:13:08.712313  295480 out.go:171] MINIKUBE_LOCATION=21772
	I1026 08:13:08.715366  295480 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:13:08.718273  295480 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:13:08.721021  295480 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:13:08.723905  295480 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1026 08:13:08.729832  295480 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 08:13:08.730234  295480 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:13:08.758871  295480 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:13:08.759088  295480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:08.825859  295480 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-26 08:13:08.810418144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:08.825968  295480 docker.go:318] overlay module found
	I1026 08:13:08.829052  295480 out.go:99] Using the docker driver based on user configuration
	I1026 08:13:08.829098  295480 start.go:305] selected driver: docker
	I1026 08:13:08.829105  295480 start.go:925] validating driver "docker" against <nil>
	I1026 08:13:08.829226  295480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:08.888493  295480 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-26 08:13:08.879445927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:08.888680  295480 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:13:08.888976  295480 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1026 08:13:08.889142  295480 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 08:13:08.892263  295480 out.go:171] Using Docker driver with root privileges
	I1026 08:13:08.895218  295480 cni.go:84] Creating CNI manager for ""
	I1026 08:13:08.895292  295480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:13:08.895306  295480 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:13:08.895401  295480 start.go:349] cluster config:
	{Name:download-only-923578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-923578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:13:08.898292  295480 out.go:99] Starting "download-only-923578" primary control-plane node in "download-only-923578" cluster
	I1026 08:13:08.898315  295480 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:13:08.901218  295480 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:13:08.901265  295480 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:13:08.901331  295480 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:13:08.918518  295480 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 08:13:08.918783  295480 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 08:13:08.918933  295480 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 08:13:08.965218  295480 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 08:13:08.965245  295480 cache.go:58] Caching tarball of preloaded images
	I1026 08:13:08.965401  295480 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:13:08.968671  295480 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1026 08:13:08.968700  295480 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1026 08:13:09.059651  295480 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1026 08:13:09.059827  295480 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 08:13:12.060105  295480 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 08:13:12.060568  295480 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/download-only-923578/config.json ...
	I1026 08:13:12.060607  295480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/download-only-923578/config.json: {Name:mk4b1ef5eedf9039a09c3af913b77616ea34efc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:13:12.061446  295480 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:13:12.062299  295480 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21772-293616/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-923578 host does not exist
	  To start a cluster, run: "minikube start -p download-only-923578"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
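
Note: the exit status 85 from `minikube logs` is expected here, since a --download-only run never creates the host, exactly as the stdout explains. Separately, the download URL in the Last Start log carries a `?checksum=md5:...` query: minikube fetches the checksum from the GCS API first, then verifies the tarball against it before caching. A sketch of that verification, using the checksum value from the log above:

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Checksum returned by the GCS API in the log above.
		want := "e092595ade89dbfc477bd4cd6b9c633b"

		f, err := os.Open("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		got := fmt.Sprintf("%x", h.Sum(nil))
		fmt.Println("checksum match:", got == want)
	}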

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-923578
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.52s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-431602 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-431602 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.519932557s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.52s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1026 08:13:18.917709  295475 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1026 08:13:18.917747  295475 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-293616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-431602
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-431602: exit status 85 (94.981237ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-923578 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-923578 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ delete  │ -p download-only-923578                                                                                                                                                   │ download-only-923578 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │ 26 Oct 25 08:13 UTC │
	│ start   │ -o=json --download-only -p download-only-431602 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-431602 │ jenkins │ v1.37.0 │ 26 Oct 25 08:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:13:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:13:14.440336  295675 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:13:14.440515  295675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:14.440548  295675 out.go:374] Setting ErrFile to fd 2...
	I1026 08:13:14.440569  295675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:13:14.440851  295675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:13:14.441300  295675 out.go:368] Setting JSON to true
	I1026 08:13:14.442133  295675 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6945,"bootTime":1761459450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:13:14.442231  295675 start.go:141] virtualization:  
	I1026 08:13:14.445569  295675 out.go:99] [download-only-431602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:13:14.445836  295675 notify.go:220] Checking for updates...
	I1026 08:13:14.448862  295675 out.go:171] MINIKUBE_LOCATION=21772
	I1026 08:13:14.451943  295675 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:13:14.454856  295675 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:13:14.457822  295675 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:13:14.460762  295675 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1026 08:13:14.466507  295675 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 08:13:14.466831  295675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:13:14.492684  295675 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:13:14.492808  295675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:14.552930  295675 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-26 08:13:14.543462991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:14.553032  295675 docker.go:318] overlay module found
	I1026 08:13:14.555977  295675 out.go:99] Using the docker driver based on user configuration
	I1026 08:13:14.556021  295675 start.go:305] selected driver: docker
	I1026 08:13:14.556029  295675 start.go:925] validating driver "docker" against <nil>
	I1026 08:13:14.556158  295675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:13:14.609740  295675 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-26 08:13:14.601248602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:13:14.609892  295675 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:13:14.610150  295675 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1026 08:13:14.610305  295675 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 08:13:14.613323  295675 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-431602 host does not exist
	  To start a cluster, run: "minikube start -p download-only-431602"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-431602
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1026 08:13:20.093477  295475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-991071 --alsologtostderr --binary-mirror http://127.0.0.1:45987 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-991071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-991071
--- PASS: TestBinaryMirror (0.66s)
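
Note: TestBinaryMirror points --binary-mirror at a throwaway HTTP server on 127.0.0.1:45987 so that the Kubernetes binary downloads resolve against it rather than dl.k8s.io. A minimal stand-in for such a mirror (the test harness runs its own server; the directory name here is hypothetical):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of cached release binaries; the address matches
		// the --binary-mirror flag passed to minikube start above.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:45987", fs))
	}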

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-178002
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-178002: exit status 85 (120.5798ms)

                                                
                                                
-- stdout --
	* Profile "addons-178002" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-178002"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)
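
Note: exit status 85 is what minikube returns when asked to operate on a profile that does not exist, which is exactly the behavior the PreSetup tests assert before the cluster is created. A sketch of that assertion, assuming the binary layout used above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"addons", "enable", "dashboard", "-p", "addons-178002")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		// The harness treats exit code 85 on a missing profile as a pass.
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 85 {
			fmt.Println("got expected exit status 85 (profile not found)")
		}
	}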

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-178002
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-178002: exit status 85 (110.430395ms)

                                                
                                                
-- stdout --
	* Profile "addons-178002" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-178002"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
TestAddons/Setup (171.76s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-178002 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-178002 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m51.761326311s)
--- PASS: TestAddons/Setup (171.76s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-178002 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-178002 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-178002 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-178002 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [73b86850-50e4-406d-ba5a-cbf3c70b1a29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [73b86850-50e4-406d-ba5a-cbf3c70b1a29] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004271609s
addons_test.go:694: (dbg) Run:  kubectl --context addons-178002 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-178002 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-178002 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-178002 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.89s)
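
Note: the gcp-auth addon's webhook mutates the busybox pod to mount fake credentials and export GOOGLE_APPLICATION_CREDENTIALS, and the printenv/cat execs above assert both halves. From inside the pod the check reduces to something like this sketch:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Set by the gcp-auth webhook when it mutates the pod spec.
		path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
		if path == "" {
			fmt.Println("credentials env var not injected")
			return
		}
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("mounted credentials unreadable:", err)
			return
		}
		fmt.Printf("found %d bytes of (fake) credentials at %s\n", len(data), path)
	}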

                                                
                                    
TestAddons/StoppedEnableDisable (12.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-178002
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-178002: (12.254237334s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-178002
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-178002
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-178002
--- PASS: TestAddons/StoppedEnableDisable (12.53s)

                                                
                                    
TestCertOptions (38.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-094384 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.649316516s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-094384 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-094384 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-094384 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-094384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-094384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-094384: (2.237849129s)
--- PASS: TestCertOptions (38.60s)
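
Note: the openssl invocation above dumps the apiserver certificate so the test can check its SANs for every --apiserver-ips and --apiserver-names value, and the kubeconfig for port 8555. The same inspection in Go, assuming a local copy of apiserver.crt pulled off the node:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Expect localhost / www.google.com and 127.0.0.1 / 192.168.15.15
		// among the SANs, per the flags passed to minikube start above.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs:", cert.IPAddresses)
	}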

                                                
                                    
TestCertExpiration (240.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1026 09:18:59.192632  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-375355 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.764660388s)
E1026 09:21:13.918755  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-375355 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.870337182s)
helpers_test.go:175: Cleaning up "cert-expiration-375355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-375355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-375355: (2.727899363s)
--- PASS: TestCertExpiration (240.36s)
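
Note: the two starts above differ only in --cert-expiration. The first issues cluster certificates valid for just three minutes so they lapse before the second start (note the gap between the 09:18 and 09:21 timestamps); the second start must then succeed by reissuing them with a one-year validity. The flag values parse as ordinary Go durations:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")   // first start: expires mid-test
		long, _ := time.ParseDuration("8760h") // second start: one year
		fmt.Println(short, "vs", long, "=", long.Hours()/24, "days")
	}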

                                                
                                    
TestForceSystemdFlag (40.89s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-709359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-709359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.002218268s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-709359 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-709359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-709359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-709359: (2.585451931s)
--- PASS: TestForceSystemdFlag (40.89s)
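
Note: with --force-systemd, the CRI-O drop-in that minikube generates is expected to pin the cgroup manager to systemd, which is what the `ssh cat` of 02-crio.conf verifies. A sketch of the same check run on the node (exact drop-in contents vary by minikube version):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		for _, line := range strings.Split(string(data), "\n") {
			// Expect: cgroup_manager = "systemd" when --force-systemd is set.
			if strings.Contains(line, "cgroup_manager") {
				fmt.Println(line)
			}
		}
	}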

                                                
                                    
TestForceSystemdEnv (35.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-003748 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.47505512s)
helpers_test.go:175: Cleaning up "force-systemd-env-003748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-003748
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-003748: (2.601021542s)
--- PASS: TestForceSystemdEnv (35.08s)

                                                
                                    
TestErrorSpam/setup (35.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-541332 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-541332 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-541332 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-541332 --driver=docker  --container-runtime=crio: (35.951078529s)
--- PASS: TestErrorSpam/setup (35.95s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (5.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause: exit status 80 (1.704128529s)
-- stdout --
	* Pausing node nospam-541332 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:20:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause: exit status 80 (1.668615512s)
-- stdout --
	* Pausing node nospam-541332 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:20:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause: exit status 80 (2.450924956s)
-- stdout --
	* Pausing node nospam-541332 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:20:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.83s)
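
Note: all three pause invocations above exit with status 80, and the unpause runs below fail identically, because `sudo runc list -f json` inside the node aborts with `open /run/runc: no such file or directory`; the subtest is still marked PASS because error_spam_test only logs these failures. A minimal sketch of how a caller could tolerate the missing runc state directory, treating it as "no containers" (the helper and its fallback behaviour are illustrative assumptions, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listRunc shells out the same way the failing pause path above does.
// If runc reports that its state directory is missing (nothing has
// ever been started), return an empty JSON list instead of an error.
// This fallback is an assumption for illustration only.
func listRunc() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return "[]", nil
		}
		return "", fmt.Errorf("runc list: %w: %s", err, out)
	}
	return string(out), nil
}

func main() {
	containers, err := listRunc()
	fmt.Println(containers, err)
}
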
TestErrorSpam/unpause (6.22s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause: exit status 80 (2.041556435s)
-- stdout --
	* Unpausing node nospam-541332 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:20:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause: exit status 80 (1.862182519s)
-- stdout --
	* Unpausing node nospam-541332 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:20:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause: exit status 80 (2.316744953s)
-- stdout --
	* Unpausing node nospam-541332 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:20:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.22s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 stop: (1.317979476s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541332 --log_dir /tmp/nospam-541332 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21772-293616/.minikube/files/etc/test/nested/copy/295475/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (81.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622437 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1026 08:21:13.923209  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:13.929699  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:13.941171  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:13.962620  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:14.004035  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:14.085697  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:14.247163  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:14.568767  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:15.210839  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:16.492220  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:19.053898  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:24.176010  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:34.417814  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:21:54.899230  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-622437 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.760179197s)
--- PASS: TestFunctional/serial/StartWithProxy (81.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.59s)

=== RUN   TestFunctional/serial/SoftStart
I1026 08:21:55.637384  295475 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622437 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-622437 --alsologtostderr -v=8: (27.579212604s)
functional_test.go:678: soft start took 27.585227924s for "functional-622437" cluster.
I1026 08:22:23.216899  295475 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.59s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-622437 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 cache add registry.k8s.io/pause:3.1: (1.131827341s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 cache add registry.k8s.io/pause:3.3: (1.142302571s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 cache add registry.k8s.io/pause:latest: (1.076718739s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-622437 /tmp/TestFunctionalserialCacheCmdcacheadd_local3858586230/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cache add minikube-local-cache-test:functional-622437
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cache delete minikube-local-cache-test:functional-622437
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-622437
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.457954ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
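
The cycle above is: remove the cached image in the node, confirm `crictl inspecti` now fails, run `cache reload`, then confirm the image is back. The same cycle can be driven from Go, reusing the binary path, profile, and image name from this transcript (the helper itself is illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

const (
	bin     = "out/minikube-linux-arm64"
	profile = "functional-622437"
	img     = "registry.k8s.io/pause:latest"
)

// run invokes the minikube binary exactly as the transcript above does.
func run(args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %w: %s", args, err, out)
	}
	return nil
}

func main() {
	_ = run("-p", profile, "ssh", "sudo crictl rmi "+img)
	// While the image is absent, inspecti is expected to fail.
	if run("-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
	}
	_ = run("-p", profile, "cache", "reload")
	// After the reload, inspecti should succeed again.
	fmt.Println(run("-p", profile, "ssh", "sudo crictl inspecti "+img))
}
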
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 kubectl -- --context functional-622437 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-622437 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (79.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622437 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1026 08:22:35.862474  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-622437 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m19.577981842s)
functional_test.go:776: restart took 1m19.578108916s for "functional-622437" cluster.
I1026 08:23:50.143949  295475 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (79.58s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-622437 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 logs: (1.521964025s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.55s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 logs --file /tmp/TestFunctionalserialLogsFileCmd3240018661/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 logs --file /tmp/TestFunctionalserialLogsFileCmd3240018661/001/logs.txt: (1.544324604s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-622437 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-622437
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-622437: exit status 115 (392.600031ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31813 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-622437 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
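
Exit status 115 is the expected result here: the Service object exists and is assigned a NodePort URL, but no running pod backs it, hence SVC_UNREACHABLE. A hedged pre-check before calling `minikube service` could look like the sketch below (a standard kubectl jsonpath query; this is not something the test itself runs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasEndpoints reports whether the Service has at least one ready
// endpoint address; the invalid-svc above would return false.
func hasEndpoints(context, svc string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "endpoints", svc,
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	ok, err := hasEndpoints("functional-622437", "invalid-svc")
	fmt.Println(ok, err)
}
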
TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 config get cpus: exit status 14 (84.720796ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 config get cpus
E1026 08:23:57.784031  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 config get cpus: exit status 14 (81.954184ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
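
As the two non-zero exits show, `config get` on an unset key exits with status 14 rather than printing an empty value, so scripted callers must branch on the exit code. A minimal sketch (the helper name is hypothetical):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configGet treats exit status 14 as "key not set" rather than a
// failure, matching the two non-zero exits logged above.
func configGet(profile, key string) (string, bool, error) {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", profile, "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	fmt.Println(configGet("functional-622437", "cpus"))
}
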
TestFunctional/parallel/DashboardCmd (15.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-622437 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-622437 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 322024: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.21s)
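
The `unable to kill pid 322024` note above is a benign cleanup race: the dashboard process had already exited when the helper tried to stop it. Go exposes this case directly as os.ErrProcessDone, as in this sketch (not helpers_test.go's actual code):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// stop kills a daemon process but treats "already finished" as
// success, mirroring the benign helper message above.
func stop(cmd *exec.Cmd) error {
	err := cmd.Process.Kill()
	if errors.Is(err, os.ErrProcessDone) {
		return nil
	}
	return err
}

func main() {
	cmd := exec.Command("true")
	_ = cmd.Start()
	_ = cmd.Wait() // the process is gone before we try to stop it
	fmt.Println(stop(cmd)) // <nil>, not "process already finished"
}
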
TestFunctional/parallel/DryRun (0.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622437 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-622437 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (237.063024ms)
-- stdout --
	* [functional-622437] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1026 08:34:25.831735  321511 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:34:25.831964  321511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:34:25.831997  321511 out.go:374] Setting ErrFile to fd 2...
	I1026 08:34:25.832018  321511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:34:25.832307  321511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:34:25.832714  321511 out.go:368] Setting JSON to false
	I1026 08:34:25.833660  321511 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8216,"bootTime":1761459450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:34:25.833762  321511 start.go:141] virtualization:  
	I1026 08:34:25.836857  321511 out.go:179] * [functional-622437] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 08:34:25.840675  321511 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:34:25.840754  321511 notify.go:220] Checking for updates...
	I1026 08:34:25.846477  321511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:34:25.849413  321511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:34:25.852312  321511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:34:25.855190  321511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:34:25.858185  321511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:34:25.861597  321511 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:34:25.862182  321511 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:34:25.896264  321511 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:34:25.896389  321511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:34:25.986024  321511 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 08:34:25.975455563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:34:25.986287  321511 docker.go:318] overlay module found
	I1026 08:34:25.989562  321511 out.go:179] * Using the docker driver based on existing profile
	I1026 08:34:25.992491  321511 start.go:305] selected driver: docker
	I1026 08:34:25.992509  321511 start.go:925] validating driver "docker" against &{Name:functional-622437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:34:25.992605  321511 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:34:25.996252  321511 out.go:203] 
	W1026 08:34:25.999117  321511 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 08:34:26.004952  321511 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622437 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.57s)
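
This test and InternationalLanguage below exercise the same validation: requesting 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY against the 1800MB floor quoted in the message. A toy restatement of that check (the constant comes from the message above, not necessarily minikube's exact internal value):

package main

import "fmt"

// minUsableMiB mirrors the 1800MB floor reported above; it is an
// assumption taken from the log, not minikube's internal constant.
const minUsableMiB = 1800

func validateMemory(reqMiB int) error {
	if reqMiB < minUsableMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMiB, minUsableMiB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in both dry runs
	fmt.Println(validateMemory(4096)) // ok
}
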
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622437 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-622437 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (218.411698ms)
-- stdout --
	* [functional-622437] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1026 08:34:25.630070  321465 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:34:25.630225  321465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:34:25.630237  321465 out.go:374] Setting ErrFile to fd 2...
	I1026 08:34:25.630242  321465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:34:25.631227  321465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:34:25.631708  321465 out.go:368] Setting JSON to false
	I1026 08:34:25.632667  321465 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8216,"bootTime":1761459450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 08:34:25.632740  321465 start.go:141] virtualization:  
	I1026 08:34:25.638567  321465 out.go:179] * [functional-622437] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1026 08:34:25.641564  321465 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:34:25.641603  321465 notify.go:220] Checking for updates...
	I1026 08:34:25.647518  321465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:34:25.650493  321465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 08:34:25.653386  321465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 08:34:25.656124  321465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 08:34:25.659052  321465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:34:25.662552  321465 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:34:25.663268  321465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:34:25.698487  321465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 08:34:25.698623  321465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:34:25.759481  321465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 08:34:25.749473335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:34:25.759588  321465 docker.go:318] overlay module found
	I1026 08:34:25.762789  321465 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1026 08:34:25.765618  321465 start.go:305] selected driver: docker
	I1026 08:34:25.765638  321465 start.go:925] validating driver "docker" against &{Name:functional-622437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622437 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:34:25.765754  321465 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:34:25.769362  321465 out.go:203] 
	W1026 08:34:25.772308  321465 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 08:34:25.775138  321465 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
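
The -f flag above renders status through a Go template; "kublet:" there is just a literal label in the format string, while the real field is {{.Kubelet}}. The same fields come back from status -o json. A minimal sketch of consuming that form, assuming a minikube binary on PATH (not the out/minikube-linux-arm64 build used here) and a single-node profile so the output is one object:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Field names mirror the template keys exercised above; treat this struct
// as an assumed schema, not a documented contract.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-622437", "status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err) // status exits nonzero when a component is down
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}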

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (25.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [3323c4ea-96f8-48d6-a2a9-cf34c69c954f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.046100856s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-622437 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-622437 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-622437 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-622437 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [24c456ef-3205-4a9f-9c44-a9416ca40009] Pending
helpers_test.go:352: "sp-pod" [24c456ef-3205-4a9f-9c44-a9416ca40009] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [24c456ef-3205-4a9f-9c44-a9416ca40009] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003312833s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-622437 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-622437 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-622437 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a8ae97bc-e346-48b0-b5dc-9c7546f41dd8] Pending
helpers_test.go:352: "sp-pod" [a8ae97bc-e346-48b0-b5dc-9c7546f41dd8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003431823s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-622437 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.97s)
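
The sequence above is the standard dynamic-provisioning round trip: apply a claim, mount it in a pod, write a file, delete and recreate the pod, and confirm the file survived. A condensed sketch of the apply-and-wait portion, using only kubectl against the same context; the 4-minute deadline mirrors the test's wait and the polling loop is an assumption, not minikube code:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "functional-622437"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for _, f := range []string{
		"testdata/storage-provisioner/pvc.yaml",
		"testdata/storage-provisioner/pod.yaml",
	} {
		if out, err := kubectl("apply", "-f", f); err != nil {
			log.Fatalf("apply %s: %v\n%s", f, err, out)
		}
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		phase, _ := kubectl("get", "pod", "sp-pod", "-o", "jsonpath={.status.phase}")
		if phase == "Running" {
			fmt.Println("sp-pod is Running; claim bound and mounted")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for sp-pod")
}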

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh -n functional-622437 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cp functional-622437:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2686172398/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh -n functional-622437 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh -n functional-622437 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)
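
The cp helpers above copy a file into the node and read it back over ssh in both directions. The same round trip in miniature, assuming a minikube binary on PATH; the paths are the ones from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) string {
	full := append([]string{"-p", "functional-622437"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy into the node, then read it back through ssh, as helpers_test.go does.
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "-n", "functional-622437", "sudo cat /home/docker/cp-test.txt"))
}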

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/295475/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /etc/test/nested/copy/295475/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.97s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/295475.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /etc/ssl/certs/295475.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/295475.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /usr/share/ca-certificates/295475.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /etc/ssl/certs/51391683.0"
2025/10/26 08:34:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2954752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /etc/ssl/certs/2954752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2954752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /usr/share/ca-certificates/2954752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.97s)
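
Each certificate is checked at its plain path and at an OpenSSL subject-hash name (51391683.0 and 3ec20f2e.0 above), which is the filename TLS libraries use to look up CAs under /etc/ssl/certs. A presence-check sketch for one cert, assuming a minikube binary on PATH; it only verifies readability, as the sudo cat calls do:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Plain path, secondary path, and the subject-hash symlink for the same cert.
	paths := []string{
		"/etc/ssl/certs/295475.pem",
		"/usr/share/ca-certificates/295475.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		cmd := exec.Command("minikube", "-p", "functional-622437", "ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s missing or unreadable: %v", p, err)
		}
		log.Printf("%s present", p)
	}
}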

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-622437 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh "sudo systemctl is-active docker": exit status 1 (383.395273ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh "sudo systemctl is-active containerd": exit status 1 (352.754655ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
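
systemctl is-active prints the unit state and encodes it in the exit code (0 for active, 3 for inactive), so the expected result here is both the string "inactive" and a nonzero exit, surfaced as exit status 1 by the ssh wrapper. A sketch of the same assertion for a crio cluster, assuming a minikube binary on PATH; the string matching is deliberately loose because CombinedOutput also captures the ssh wrapper's stderr:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", "functional-622437",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if strings.Contains(state, "active") && !strings.Contains(state, "inactive") {
			log.Fatalf("%s unexpectedly active", unit)
		}
		// err is expected to be non-nil: is-active exits 3 for inactive units.
		log.Printf("%s: %q (err=%v)", unit, state, err)
	}
}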

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-622437 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-622437 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-622437 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 318174: os: process already finished
helpers_test.go:519: unable to terminate pid 317950: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-622437 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-622437 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-622437 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [801411ff-a3fc-4da4-b23f-1ceae7fd8779] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [801411ff-a3fc-4da4-b23f-1ceae7fd8779] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003422584s
I1026 08:24:07.653571  295475 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-622437 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.237.220 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
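
With the tunnel up, the service's LoadBalancer ingress IP (10.97.237.220 here, a ClusterIP-range address routed by the tunnel) answers plain HTTP from the host. A sketch of that direct-access check, assuming kubectl is available and using the same context and service name as the log:

package main

import (
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Read the ingress IP that "minikube tunnel" populated on the service.
	out, err := exec.Command("kubectl", "--context", "functional-622437",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Printf("tunnel reachable: %s -> %s", ip, resp.Status)
}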

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-622437 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "379.128352ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "64.645123ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "368.975963ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.350115ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
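
The timing gap above is the point of the subtest: --light skips per-cluster health probes, so it returns in ~57ms versus ~369ms for the full listing. A decoding sketch that stays deliberately schema-agnostic (the top-level key names of profile list -o json are an assumption here, hence the raw map):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode into a raw map rather than committing to exact field names.
	var groups map[string]json.RawMessage
	if err := json.Unmarshal(out, &groups); err != nil {
		log.Fatal(err)
	}
	for name, raw := range groups {
		fmt.Printf("%s: %d bytes\n", name, len(raw))
	}
}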

TestFunctional/parallel/MountCmd/any-port (7.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdany-port516417523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761467653107764815" to /tmp/TestFunctionalparallelMountCmdany-port516417523/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761467653107764815" to /tmp/TestFunctionalparallelMountCmdany-port516417523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761467653107764815" to /tmp/TestFunctionalparallelMountCmdany-port516417523/001/test-1761467653107764815
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.833572ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 08:34:13.456868  295475 retry.go:31] will retry after 615.117031ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 08:34 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 08:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 08:34 test-1761467653107764815
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh cat /mount-9p/test-1761467653107764815
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-622437 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [78b89c27-36ea-440d-ae5c-d5cfeb5a39a7] Pending
helpers_test.go:352: "busybox-mount" [78b89c27-36ea-440d-ae5c-d5cfeb5a39a7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [78b89c27-36ea-440d-ae5c-d5cfeb5a39a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [78b89c27-36ea-440d-ae5c-d5cfeb5a39a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003245168s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-622437 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdany-port516417523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.05s)
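
The single failed findmnt followed by a retry is normal: the 9p mount daemon comes up asynchronously, so the test polls until the mount is visible in the guest. A condensed version of that loop, assuming a minikube binary on PATH and that "minikube mount ...:/mount-9p" is already running in another process as in the log:

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second) // arbitrary; the test uses retry.go backoff
	for time.Now().Before(deadline) {
		out, err := exec.Command("minikube", "-p", "functional-622437",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			log.Println("9p mount visible in the guest")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("mount never appeared")
}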

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdspecific-port3697891839/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.474359ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 08:34:20.498890  295475 retry.go:31] will retry after 678.692903ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdspecific-port3697891839/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh "sudo umount -f /mount-9p": exit status 1 (296.824061ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-622437 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdspecific-port3697891839/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2518715903/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2518715903/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2518715903/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T" /mount1: exit status 1 (588.882872ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 08:34:22.829034  295475 retry.go:31] will retry after 723.814629ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-622437 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2518715903/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2518715903/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622437 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2518715903/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 service list -o json
functional_test.go:1504: Took "633.414718ms" to run "out/minikube-linux-arm64 -p functional-622437 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.38s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 version -o=json --components: (1.379136374s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622437 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622437 image ls --format short --alsologtostderr:
I1026 08:34:42.448978  324062 out.go:360] Setting OutFile to fd 1 ...
I1026 08:34:42.449124  324062 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:42.449151  324062 out.go:374] Setting ErrFile to fd 2...
I1026 08:34:42.449172  324062 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:42.449524  324062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
I1026 08:34:42.450364  324062 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:42.450549  324062 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:42.451245  324062 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
I1026 08:34:42.470475  324062 ssh_runner.go:195] Run: systemctl --version
I1026 08:34:42.470524  324062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
I1026 08:34:42.492437  324062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
I1026 08:34:42.601426  324062 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622437 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622437 image ls --format table --alsologtostderr:
I1026 08:34:43.400694  324271 out.go:360] Setting OutFile to fd 1 ...
I1026 08:34:43.402123  324271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:43.402144  324271 out.go:374] Setting ErrFile to fd 2...
I1026 08:34:43.402152  324271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:43.402876  324271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
I1026 08:34:43.403587  324271 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:43.403713  324271 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:43.404173  324271 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
I1026 08:34:43.434692  324271 ssh_runner.go:195] Run: systemctl --version
I1026 08:34:43.434787  324271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
I1026 08:34:43.460123  324271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
I1026 08:34:43.570077  324271 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622437 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{
"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc",
"repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"ba04bb24b95753201135cb
c420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["regis
try.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"e612b971
16b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622437 image ls --format json --alsologtostderr:
I1026 08:34:43.106215  324190 out.go:360] Setting OutFile to fd 1 ...
I1026 08:34:43.106457  324190 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:43.106487  324190 out.go:374] Setting ErrFile to fd 2...
I1026 08:34:43.106506  324190 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:43.106856  324190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
I1026 08:34:43.107539  324190 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:43.107722  324190 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:43.108234  324190 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
I1026 08:34:43.136536  324190 ssh_runner.go:195] Run: systemctl --version
I1026 08:34:43.136592  324190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
I1026 08:34:43.160330  324190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
I1026 08:34:43.269667  324190 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
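
The stdout above is a flat array of image records, so it decodes directly; the struct fields below are taken from the keys visible in that output (note that size arrives as a string and repoTags can be empty for untagged images such as the dashboard). A decoding sketch, assuming a minikube binary on PATH:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-622437",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, im := range imgs {
		tag := "<none>"
		if len(im.RepoTags) > 0 {
			tag = im.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, im.Size)
	}
}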

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622437 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622437 image ls --format yaml --alsologtostderr:
I1026 08:34:42.695617  324103 out.go:360] Setting OutFile to fd 1 ...
I1026 08:34:42.695814  324103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:42.695841  324103 out.go:374] Setting ErrFile to fd 2...
I1026 08:34:42.695860  324103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:42.696159  324103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
I1026 08:34:42.697020  324103 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:42.697207  324103 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:42.698441  324103 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
I1026 08:34:42.715756  324103 ssh_runner.go:195] Run: systemctl --version
I1026 08:34:42.715808  324103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
I1026 08:34:42.738486  324103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
I1026 08:34:42.850909  324103 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622437 ssh pgrep buildkitd: exit status 1 (363.3322ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image build -t localhost/my-image:functional-622437 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-622437 image build -t localhost/my-image:functional-622437 testdata/build --alsologtostderr: (3.399343843s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622437 image build -t localhost/my-image:functional-622437 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d11ab06ab4e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-622437
--> c4ab0a7fa9f
Successfully tagged localhost/my-image:functional-622437
c4ab0a7fa9f7f3aa7342a04893ccc2f784af94be3f8e241bf530878a1d7ac1c7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622437 image build -t localhost/my-image:functional-622437 testdata/build --alsologtostderr:
I1026 08:34:43.320948  324259 out.go:360] Setting OutFile to fd 1 ...
I1026 08:34:43.321996  324259 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:43.322049  324259 out.go:374] Setting ErrFile to fd 2...
I1026 08:34:43.322069  324259 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:34:43.322379  324259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
I1026 08:34:43.323476  324259 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:43.327075  324259 config.go:182] Loaded profile config "functional-622437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:34:43.334189  324259 cli_runner.go:164] Run: docker container inspect functional-622437 --format={{.State.Status}}
I1026 08:34:43.371854  324259 ssh_runner.go:195] Run: systemctl --version
I1026 08:34:43.371912  324259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622437
I1026 08:34:43.406622  324259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/functional-622437/id_rsa Username:docker}
I1026 08:34:43.514327  324259 build_images.go:161] Building image from path: /tmp/build.1250688528.tar
I1026 08:34:43.514407  324259 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 08:34:43.526169  324259 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1250688528.tar
I1026 08:34:43.530213  324259 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1250688528.tar: stat -c "%s %y" /var/lib/minikube/build/build.1250688528.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1250688528.tar': No such file or directory
I1026 08:34:43.530246  324259 ssh_runner.go:362] scp /tmp/build.1250688528.tar --> /var/lib/minikube/build/build.1250688528.tar (3072 bytes)
I1026 08:34:43.552848  324259 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1250688528
I1026 08:34:43.561479  324259 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1250688528 -xf /var/lib/minikube/build/build.1250688528.tar
I1026 08:34:43.573138  324259 crio.go:315] Building image: /var/lib/minikube/build/build.1250688528
I1026 08:34:43.573208  324259 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-622437 /var/lib/minikube/build/build.1250688528 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1026 08:34:46.624823  324259 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-622437 /var/lib/minikube/build/build.1250688528 --cgroup-manager=cgroupfs: (3.051587087s)
I1026 08:34:46.624894  324259 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1250688528
I1026 08:34:46.632732  324259 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1250688528.tar
I1026 08:34:46.641378  324259 build_images.go:217] Built localhost/my-image:functional-622437 from /tmp/build.1250688528.tar
I1026 08:34:46.641411  324259 build_images.go:133] succeeded building to: functional-622437
I1026 08:34:46.641417  324259 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
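
The trace above shows the whole remote-build round trip: pack the context into a tar, copy it to the node, unpack, build with podman under cgroupfs, then clean up. A rough Go sketch of the node-side steps (the ssh invocation and paths are illustrative placeholders, and the scp of the tar is elided; minikube drives this through its own ssh_runner):

// Sketch of the build flow visible in the log above. Not minikube's API:
// host, tar and dir are placeholders, and the tar upload itself is elided.
package main

import (
	"fmt"
	"os/exec"
)

func runOnNode(host string, args ...string) error {
	cmd := exec.Command("ssh", append([]string{host}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	const (
		host = "docker@127.0.0.1" // placeholder for the minikube node
		tar  = "/var/lib/minikube/build/build.tar"
		dir  = "/var/lib/minikube/build/ctx"
		tag  = "localhost/my-image:functional-622437"
	)
	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", tar},
		{"sudo", "podman", "build", "-t", tag, dir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", dir},
		{"sudo", "rm", "-f", tar},
	}
	for _, s := range steps {
		if err := runOnNode(host, s...); err != nil {
			panic(err)
		}
	}
}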

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-622437
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image rm kicbase/echo-server:functional-622437 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-622437 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-622437
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-622437
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-622437
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (209.09s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1026 08:36:13.918878  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:37:36.987946  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m28.245241829s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.09s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (36.86s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 kubectl -- rollout status deployment/busybox: (5.216436413s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:24.510073  295475 retry.go:31] will retry after 705.98955ms: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:25.399239  295475 retry.go:31] will retry after 1.154708418s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:26.717772  295475 retry.go:31] will retry after 2.517875452s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:29.413048  295475 retry.go:31] will retry after 2.308659827s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:31.883094  295475 retry.go:31] will retry after 6.731087546s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:38.783470  295475 retry.go:31] will retry after 4.533398601s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
I1026 08:38:43.513388  295475 retry.go:31] will retry after 9.712472013s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4 10.244.2.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-cm8cd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-h2f8r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-lb2w6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-cm8cd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-h2f8r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-lb2w6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-cm8cd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-h2f8r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-lb2w6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (36.86s)
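
The retries above come from the test polling pod IPs until the deployment settles: during the rollout an old replica can linger briefly before garbage collection, which is the likely source of the transient fourth IP. A sketch of that poll-with-backoff loop (timings and the retry cap are illustrative):

// Sketch of the retry loop visible above: poll the busybox pod IPs until
// exactly three remain, doubling the backoff between attempts.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs() ([]string, error) {
	out, err := exec.Command("kubectl", "--context", "ha-232402",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	backoff := 500 * time.Millisecond
	for i := 0; i < 10; i++ {
		ips, err := podIPs()
		if err == nil && len(ips) == 3 {
			fmt.Println("ready:", ips)
			return
		}
		fmt.Printf("expected 3 Pod IPs but got %d (may be temporary), retrying in %v\n", len(ips), backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	panic("pods never settled on 3 IPs")
}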

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.51s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-cm8cd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-cm8cd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-h2f8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-h2f8r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-lb2w6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 kubectl -- exec busybox-7b57f96db7-lb2w6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)
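
The pipeline used above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the resolved host IP out of busybox's nslookup output: line five, third space-separated field. A sketch of the same parse in Go (the sample output shape is an assumption about busybox nslookup formatting):

// Sketch of awk 'NR==5' | cut -d' ' -f3 applied to nslookup output.
package main

import (
	"fmt"
	"strings"
)

func hostIP(nslookupOutput string) (string, error) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: %q", nslookupOutput)
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4; split on single spaces, like cut
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected line: %q", lines[4])
	}
	return fields[2], nil // cut -d' ' -f3
}

func main() {
	// Assumed busybox-style output, for illustration only.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	ip, err := hostIP(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.1
}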

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.8s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node add --alsologtostderr -v 5
E1026 08:38:59.192618  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.198964  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.210261  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.231547  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.272894  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.354258  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.515706  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:38:59.837144  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:39:00.478530  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:39:01.759846  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:39:04.321622  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:39:09.443520  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:39:19.685296  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:39:40.166884  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 node add --alsologtostderr -v 5: (58.691494685s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5: (1.109481695s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.80s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-232402 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084993049s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.72s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 status --output json --alsologtostderr -v 5: (1.120849023s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp testdata/cp-test.txt ha-232402:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1668130144/001/cp-test_ha-232402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402:/home/docker/cp-test.txt ha-232402-m02:/home/docker/cp-test_ha-232402_ha-232402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test_ha-232402_ha-232402-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402:/home/docker/cp-test.txt ha-232402-m03:/home/docker/cp-test_ha-232402_ha-232402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test_ha-232402_ha-232402-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402:/home/docker/cp-test.txt ha-232402-m04:/home/docker/cp-test_ha-232402_ha-232402-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test_ha-232402_ha-232402-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp testdata/cp-test.txt ha-232402-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1668130144/001/cp-test_ha-232402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m02:/home/docker/cp-test.txt ha-232402:/home/docker/cp-test_ha-232402-m02_ha-232402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test_ha-232402-m02_ha-232402.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m02:/home/docker/cp-test.txt ha-232402-m03:/home/docker/cp-test_ha-232402-m02_ha-232402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test_ha-232402-m02_ha-232402-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m02:/home/docker/cp-test.txt ha-232402-m04:/home/docker/cp-test_ha-232402-m02_ha-232402-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test_ha-232402-m02_ha-232402-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp testdata/cp-test.txt ha-232402-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1668130144/001/cp-test_ha-232402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt ha-232402:/home/docker/cp-test_ha-232402-m03_ha-232402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt ha-232402-m02:/home/docker/cp-test_ha-232402-m03_ha-232402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m03:/home/docker/cp-test.txt ha-232402-m04:/home/docker/cp-test_ha-232402-m03_ha-232402-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test_ha-232402-m03_ha-232402-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp testdata/cp-test.txt ha-232402-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1668130144/001/cp-test_ha-232402-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402:/home/docker/cp-test_ha-232402-m04_ha-232402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402 "sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402-m02:/home/docker/cp-test_ha-232402-m04_ha-232402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m02 "sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 cp ha-232402-m04:/home/docker/cp-test.txt ha-232402-m03:/home/docker/cp-test_ha-232402-m04_ha-232402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 ssh -n ha-232402-m03 "sudo cat /home/docker/cp-test_ha-232402-m04_ha-232402-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.72s)
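
The long command sequence above is a full node-to-node copy matrix: seed cp-test.txt onto each node, read it back, fan it out to every other node, and verify each copy with ssh cat. A compact sketch of the loop that generates it (profile and node names taken from the log):

// Sketch of the cp/verify matrix exercised above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}

func main() {
	nodes := []string{"ha-232402", "ha-232402-m02", "ha-232402-m03", "ha-232402-m04"}
	for _, src := range nodes {
		// Seed the file on src and read it back.
		run("-p", "ha-232402", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", "ha-232402", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		// Copy from src to every other node and verify each copy.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", "ha-232402", "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
			run("-p", "ha-232402", "ssh", "-n", dst, "sudo cat "+dstPath)
		}
	}
}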

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node stop m02 --alsologtostderr -v 5
E1026 08:40:21.128236  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 node stop m02 --alsologtostderr -v 5: (12.068027266s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5: exit status 7 (856.00539ms)

-- stdout --
	ha-232402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-232402-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-232402-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-232402-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1026 08:40:31.105847  339303 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:40:31.105993  339303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:40:31.105999  339303 out.go:374] Setting ErrFile to fd 2...
	I1026 08:40:31.106003  339303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:40:31.106305  339303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:40:31.106738  339303 out.go:368] Setting JSON to false
	I1026 08:40:31.106805  339303 mustload.go:65] Loading cluster: ha-232402
	I1026 08:40:31.106903  339303 notify.go:220] Checking for updates...
	I1026 08:40:31.108061  339303 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:40:31.108088  339303 status.go:174] checking status of ha-232402 ...
	I1026 08:40:31.108780  339303 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:40:31.128930  339303 status.go:371] ha-232402 host status = "Running" (err=<nil>)
	I1026 08:40:31.128959  339303 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:40:31.129374  339303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402
	I1026 08:40:31.170418  339303 host.go:66] Checking if "ha-232402" exists ...
	I1026 08:40:31.170909  339303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:40:31.170980  339303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402
	I1026 08:40:31.200980  339303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402/id_rsa Username:docker}
	I1026 08:40:31.305249  339303 ssh_runner.go:195] Run: systemctl --version
	I1026 08:40:31.311964  339303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:40:31.329373  339303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:40:31.420678  339303 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-26 08:40:31.409342344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 08:40:31.421285  339303 kubeconfig.go:125] found "ha-232402" server: "https://192.168.49.254:8443"
	I1026 08:40:31.421325  339303 api_server.go:166] Checking apiserver status ...
	I1026 08:40:31.421376  339303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:40:31.434540  339303 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	I1026 08:40:31.444093  339303 api_server.go:182] apiserver freezer: "6:freezer:/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio/crio-b67928d9d6a69f9885520df37809d06ee6f669c0f50d6ca50cdfc9228836f737"
	I1026 08:40:31.444174  339303 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/601e5c9ab7d1f5693bcebea4664b9f53f81966eef0b715253a90470c33b9c9a7/crio/crio-b67928d9d6a69f9885520df37809d06ee6f669c0f50d6ca50cdfc9228836f737/freezer.state
	I1026 08:40:31.453858  339303 api_server.go:204] freezer state: "THAWED"
	I1026 08:40:31.453894  339303 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 08:40:31.462872  339303 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 08:40:31.462902  339303 status.go:463] ha-232402 apiserver status = Running (err=<nil>)
	I1026 08:40:31.462914  339303 status.go:176] ha-232402 status: &{Name:ha-232402 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:40:31.462940  339303 status.go:174] checking status of ha-232402-m02 ...
	I1026 08:40:31.463261  339303 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:40:31.480477  339303 status.go:371] ha-232402-m02 host status = "Stopped" (err=<nil>)
	I1026 08:40:31.480505  339303 status.go:384] host is not running, skipping remaining checks
	I1026 08:40:31.480540  339303 status.go:176] ha-232402-m02 status: &{Name:ha-232402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:40:31.480563  339303 status.go:174] checking status of ha-232402-m03 ...
	I1026 08:40:31.480902  339303 cli_runner.go:164] Run: docker container inspect ha-232402-m03 --format={{.State.Status}}
	I1026 08:40:31.498487  339303 status.go:371] ha-232402-m03 host status = "Running" (err=<nil>)
	I1026 08:40:31.498515  339303 host.go:66] Checking if "ha-232402-m03" exists ...
	I1026 08:40:31.498936  339303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m03
	I1026 08:40:31.519109  339303 host.go:66] Checking if "ha-232402-m03" exists ...
	I1026 08:40:31.519459  339303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:40:31.519509  339303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m03
	I1026 08:40:31.538303  339303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m03/id_rsa Username:docker}
	I1026 08:40:31.643831  339303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:40:31.660267  339303 kubeconfig.go:125] found "ha-232402" server: "https://192.168.49.254:8443"
	I1026 08:40:31.660292  339303 api_server.go:166] Checking apiserver status ...
	I1026 08:40:31.660349  339303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:40:31.680755  339303 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	I1026 08:40:31.690547  339303 api_server.go:182] apiserver freezer: "6:freezer:/docker/54c1a9d2e718b8f4da315afbe4ea4bed7362bf888bd8b2a95fdae86502d9b55c/crio/crio-bc6a418faa9ab87e661547deea36b9c790022335651629988c0e1c2e0b176b35"
	I1026 08:40:31.690667  339303 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/54c1a9d2e718b8f4da315afbe4ea4bed7362bf888bd8b2a95fdae86502d9b55c/crio/crio-bc6a418faa9ab87e661547deea36b9c790022335651629988c0e1c2e0b176b35/freezer.state
	I1026 08:40:31.700901  339303 api_server.go:204] freezer state: "THAWED"
	I1026 08:40:31.700930  339303 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 08:40:31.710379  339303 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 08:40:31.710409  339303 status.go:463] ha-232402-m03 apiserver status = Running (err=<nil>)
	I1026 08:40:31.710436  339303 status.go:176] ha-232402-m03 status: &{Name:ha-232402-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:40:31.710476  339303 status.go:174] checking status of ha-232402-m04 ...
	I1026 08:40:31.710946  339303 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:40:31.728742  339303 status.go:371] ha-232402-m04 host status = "Running" (err=<nil>)
	I1026 08:40:31.728769  339303 host.go:66] Checking if "ha-232402-m04" exists ...
	I1026 08:40:31.729061  339303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-232402-m04
	I1026 08:40:31.747401  339303 host.go:66] Checking if "ha-232402-m04" exists ...
	I1026 08:40:31.747721  339303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:40:31.747770  339303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-232402-m04
	I1026 08:40:31.766290  339303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/ha-232402-m04/id_rsa Username:docker}
	I1026 08:40:31.868233  339303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:40:31.886626  339303 status.go:176] ha-232402-m04 status: &{Name:ha-232402-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
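
The status check in the stderr trace above probes each control plane the same way: locate the kube-apiserver process, confirm its freezer cgroup is THAWED (i.e. the node is not paused), then hit /healthz on the load-balanced endpoint. A sketch of that probe, meant to run on the node itself (TLS verification is skipped here purely for brevity; the real check trusts the cluster CA):

// Sketch of the apiserver probe visible in the status log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	cg, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/cgroup")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(cg), "\n") {
		if strings.Contains(line, ":freezer:") {
			// Line looks like "6:freezer:/docker/.../crio/crio-...".
			path := line[strings.LastIndex(line, ":")+1:]
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + path + "/freezer.state")
			if err != nil {
				panic(err)
			}
			fmt.Println("freezer state:", strings.TrimSpace(string(state))) // want THAWED
		}
	}
	// Sketch only: skip cert verification instead of loading the cluster CA.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.StatusCode) // want 200
}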

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (108.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node start m02 --alsologtostderr -v 5
E1026 08:41:13.919106  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:41:43.051937  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 node start m02 --alsologtostderr -v 5: (1m46.84228812s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5: (1.177271294s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (108.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.3s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.298805035s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.30s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 node delete m03 --alsologtostderr -v 5: (11.022647549s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.17s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 stop --alsologtostderr -v 5: (36.056259889s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5: exit status 7 (110.718901ms)

-- stdout --
	ha-232402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-232402-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-232402-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 08:50:20.688599  351940 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:50:20.688805  351940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:50:20.688834  351940 out.go:374] Setting ErrFile to fd 2...
	I1026 08:50:20.688854  351940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:50:20.689156  351940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 08:50:20.689406  351940 out.go:368] Setting JSON to false
	I1026 08:50:20.689474  351940 mustload.go:65] Loading cluster: ha-232402
	I1026 08:50:20.689548  351940 notify.go:220] Checking for updates...
	I1026 08:50:20.690599  351940 config.go:182] Loaded profile config "ha-232402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:50:20.690644  351940 status.go:174] checking status of ha-232402 ...
	I1026 08:50:20.691330  351940 cli_runner.go:164] Run: docker container inspect ha-232402 --format={{.State.Status}}
	I1026 08:50:20.709414  351940 status.go:371] ha-232402 host status = "Stopped" (err=<nil>)
	I1026 08:50:20.709440  351940 status.go:384] host is not running, skipping remaining checks
	I1026 08:50:20.709448  351940 status.go:176] ha-232402 status: &{Name:ha-232402 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:50:20.709474  351940 status.go:174] checking status of ha-232402-m02 ...
	I1026 08:50:20.709780  351940 cli_runner.go:164] Run: docker container inspect ha-232402-m02 --format={{.State.Status}}
	I1026 08:50:20.728605  351940 status.go:371] ha-232402-m02 host status = "Stopped" (err=<nil>)
	I1026 08:50:20.728631  351940 status.go:384] host is not running, skipping remaining checks
	I1026 08:50:20.728638  351940 status.go:176] ha-232402-m02 status: &{Name:ha-232402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:50:20.728659  351940 status.go:174] checking status of ha-232402-m04 ...
	I1026 08:50:20.728960  351940 cli_runner.go:164] Run: docker container inspect ha-232402-m04 --format={{.State.Status}}
	I1026 08:50:20.749388  351940 status.go:371] ha-232402-m04 host status = "Stopped" (err=<nil>)
	I1026 08:50:20.749420  351940 status.go:384] host is not running, skipping remaining checks
	I1026 08:50:20.749428  351940 status.go:176] ha-232402-m04 status: &{Name:ha-232402-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.17s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (87.43s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1026 08:51:13.918591  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m26.355621498s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 node add --control-plane --alsologtostderr -v 5: (1m17.972439156s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-232402 status --alsologtostderr -v 5: (1.164587997s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.140900663s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

                                                
                                    
TestJSONOutput/start/Command (77.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-284707 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1026 08:53:59.192847  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:54:16.990861  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-284707 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.583396249s)
--- PASS: TestJSONOutput/start/Command (77.59s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-284707 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-284707 --output=json --user=testUser: (5.875256449s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-443293 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-443293 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.145361ms)
-- stdout --
	{"specversion":"1.0","id":"058e8c3b-9dd6-4106-9d8e-dd59b1fa8c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-443293] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20ae4b6f-66b5-4045-ad85-53cb539b5710","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"9538dc65-20be-453b-b958-cef77dd77883","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e9a69779-378d-47b5-865c-d36d0875c8cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig"}}
	{"specversion":"1.0","id":"47b51fea-9a6f-48ad-b96d-7a768d5eae02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube"}}
	{"specversion":"1.0","id":"bc57621f-24cb-456b-8a1c-e3adc55ad8e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"04b741e5-cdff-4681-85c8-bc11818e7fd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c22318ad-3911-4607-ae36-ed288b59eb45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-443293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-443293
--- PASS: TestErrorJSONOutput (0.25s)
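
Note: with --output=json every line minikube prints is a CloudEvents envelope; the type field distinguishes progress steps (io.k8s.sigs.minikube.step), informational messages (io.k8s.sigs.minikube.info), and errors (io.k8s.sigs.minikube.error) such as the DRV_UNSUPPORTED_OS event above. A minimal sketch for extracting only the error messages, assuming jq is available (profile name illustrative):

	minikube start -p demo --output=json --driver=fail | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'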

TestKicCustomNetwork/create_custom_network (42.46s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-110262 --network=
E1026 08:55:22.256203  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-110262 --network=: (40.236809351s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-110262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-110262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-110262: (2.190444769s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.46s)

TestKicCustomNetwork/use_default_bridge_network (33.2s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-284417 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-284417 --network=bridge: (31.042596613s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-284417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-284417
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-284417: (2.122895116s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.20s)

TestKicExistingNetwork (38.27s)

=== RUN   TestKicExistingNetwork
I1026 08:56:09.216122  295475 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1026 08:56:09.232104  295475 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1026 08:56:09.232838  295475 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1026 08:56:09.232875  295475 cli_runner.go:164] Run: docker network inspect existing-network
W1026 08:56:09.249346  295475 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1026 08:56:09.249377  295475 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1026 08:56:09.249392  295475 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1026 08:56:09.249490  295475 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 08:56:09.267640  295475 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-256d72a548e0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:db:22:fd:98:ff} reservation:<nil>}
I1026 08:56:09.268001  295475 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019438a0}
I1026 08:56:09.268025  295475 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1026 08:56:09.268077  295475 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1026 08:56:09.328698  295475 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-728427 --network=existing-network
E1026 08:56:13.921716  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-728427 --network=existing-network: (35.877780895s)
helpers_test.go:175: Cleaning up "existing-network-728427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-728427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-728427: (2.244662347s)
I1026 08:56:47.468092  295475 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.27s)
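
Note: the test creates the bridge network up front so that minikube attaches to it rather than allocating one of its own; the 192.168.58.0/24 subnet was chosen only because 192.168.49.0/24 was already taken by another profile. A minimal sketch of the same flow by hand (network and profile names illustrative):

	docker network create --driver=bridge --subnet=192.168.58.0/24 my-net
	minikube start -p demo --network=my-net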

TestKicCustomSubnet (36.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-937518 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-937518 --subnet=192.168.60.0/24: (34.382714202s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-937518 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-937518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-937518
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-937518: (2.202747264s)
--- PASS: TestKicCustomSubnet (36.61s)

TestKicStaticIP (35.71s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-679450 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-679450 --static-ip=192.168.200.200: (33.350741881s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-679450 ip
helpers_test.go:175: Cleaning up "static-ip-679450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-679450
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-679450: (2.189059753s)
--- PASS: TestKicStaticIP (35.71s)
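
Note: --static-ip pins the node container to a fixed private IPv4 address, and the follow-up ip call confirms the assignment. Sketch (profile name illustrative):

	minikube start -p demo --static-ip=192.168.200.200
	minikube -p demo ip	# expected to print 192.168.200.200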

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-453244 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-453244 --driver=docker  --container-runtime=crio: (31.33861443s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-455820 --driver=docker  --container-runtime=crio
E1026 08:58:59.194045  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-455820 --driver=docker  --container-runtime=crio: (36.564880553s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-453244
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-455820
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-455820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-455820
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-455820: (2.140736019s)
helpers_test.go:175: Cleaning up "first-453244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-453244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-453244: (2.047001667s)
--- PASS: TestMinikubeProfile (73.53s)

TestMountStart/serial/StartWithMountFirst (9.93s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-036806 --memory=3072 --mount-string /tmp/TestMountStartserial307338528/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-036806 --memory=3072 --mount-string /tmp/TestMountStartserial307338528/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.927440702s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.93s)
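
Note: --mount-string takes the form <host-path>:<guest-path>; here a per-test temp directory is exposed at /minikube-host inside the node, with uid/gid 0 and a fixed 9p port, and the next test verifies it with ssh -- ls. A sketch mirroring the logged invocation (host path and profile name illustrative):

	minikube start -p demo --mount-string /tmp/data:/minikube-host --mount-uid 0 --mount-gid 0 --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
	minikube -p demo ssh -- ls /minikube-host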

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-036806 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.9s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-038758 --memory=3072 --mount-string /tmp/TestMountStartserial307338528/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-038758 --memory=3072 --mount-string /tmp/TestMountStartserial307338528/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.901975385s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.90s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-038758 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-036806 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-036806 --alsologtostderr -v=5: (1.740658445s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-038758 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-038758
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-038758: (1.289848563s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-038758
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-038758: (7.408625793s)
--- PASS: TestMountStart/serial/RestartStopped (8.41s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-038758 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (138.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887730 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1026 09:01:13.919352  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887730 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.776628767s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.34s)
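
Note: --nodes=2 brings up the control plane and one worker (here multinode-887730-m02) in a single start, after which status reports both machines. Sketch (profile name illustrative):

	minikube start -p demo --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
	minikube -p demo status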

TestMultiNode/serial/DeployApp2Nodes (4.81s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-887730 -- rollout status deployment/busybox: (3.023543712s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-6qhsw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-kmgkw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-6qhsw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-kmgkw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-6qhsw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-kmgkw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.81s)
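
Note: the DNS checks above first discover the busybox pod names via jsonpath and then exec nslookup in each one against kubernetes.io, kubernetes.default, and the fully qualified service name. A sketch of the same loop in plain kubectl (context name illustrative):

	for p in $(kubectl --context demo get pods -o jsonpath='{.items[*].metadata.name}'); do
		kubectl --context demo exec "$p" -- nslookup kubernetes.default
	done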

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-6qhsw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-6qhsw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-kmgkw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887730 -- exec busybox-7b57f96db7-kmgkw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (59.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-887730 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-887730 -v=5 --alsologtostderr: (58.640386933s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.36s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-887730 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.75s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

TestMultiNode/serial/CopyFile (10.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp testdata/cp-test.txt multinode-887730:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile804648938/001/cp-test_multinode-887730.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730:/home/docker/cp-test.txt multinode-887730-m02:/home/docker/cp-test_multinode-887730_multinode-887730-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test_multinode-887730_multinode-887730-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730:/home/docker/cp-test.txt multinode-887730-m03:/home/docker/cp-test_multinode-887730_multinode-887730-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m03 "sudo cat /home/docker/cp-test_multinode-887730_multinode-887730-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp testdata/cp-test.txt multinode-887730-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile804648938/001/cp-test_multinode-887730-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730-m02:/home/docker/cp-test.txt multinode-887730:/home/docker/cp-test_multinode-887730-m02_multinode-887730.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730 "sudo cat /home/docker/cp-test_multinode-887730-m02_multinode-887730.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730-m02:/home/docker/cp-test.txt multinode-887730-m03:/home/docker/cp-test_multinode-887730-m02_multinode-887730-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m03 "sudo cat /home/docker/cp-test_multinode-887730-m02_multinode-887730-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp testdata/cp-test.txt multinode-887730-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile804648938/001/cp-test_multinode-887730-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730-m03:/home/docker/cp-test.txt multinode-887730:/home/docker/cp-test_multinode-887730-m03_multinode-887730.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730 "sudo cat /home/docker/cp-test_multinode-887730-m03_multinode-887730.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 cp multinode-887730-m03:/home/docker/cp-test.txt multinode-887730-m02:/home/docker/cp-test_multinode-887730-m03_multinode-887730-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test_multinode-887730-m03_multinode-887730-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.57s)
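
Note: minikube cp accepts a plain local path or a <node>:<path> form on either side, so the block above exercises all three directions: host to node, node to host, and node to node. Sketch (file and node names taken from the log):

	minikube -p multinode-887730 cp testdata/cp-test.txt multinode-887730-m02:/home/docker/cp-test.txt
	minikube -p multinode-887730 ssh -n multinode-887730-m02 "sudo cat /home/docker/cp-test.txt"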

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-887730 node stop m03: (1.335205717s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887730 status: exit status 7 (522.443328ms)
-- stdout --
	multinode-887730
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-887730-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-887730-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr: exit status 7 (552.724473ms)
-- stdout --
	multinode-887730
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-887730-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-887730-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 09:03:21.369296  402638 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:03:21.369473  402638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:03:21.369511  402638 out.go:374] Setting ErrFile to fd 2...
	I1026 09:03:21.369534  402638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:03:21.369821  402638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:03:21.370068  402638 out.go:368] Setting JSON to false
	I1026 09:03:21.370139  402638 mustload.go:65] Loading cluster: multinode-887730
	I1026 09:03:21.370215  402638 notify.go:220] Checking for updates...
	I1026 09:03:21.371219  402638 config.go:182] Loaded profile config "multinode-887730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:03:21.371245  402638 status.go:174] checking status of multinode-887730 ...
	I1026 09:03:21.372823  402638 cli_runner.go:164] Run: docker container inspect multinode-887730 --format={{.State.Status}}
	I1026 09:03:21.391661  402638 status.go:371] multinode-887730 host status = "Running" (err=<nil>)
	I1026 09:03:21.391687  402638 host.go:66] Checking if "multinode-887730" exists ...
	I1026 09:03:21.392122  402638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-887730
	I1026 09:03:21.422293  402638 host.go:66] Checking if "multinode-887730" exists ...
	I1026 09:03:21.422645  402638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:03:21.422806  402638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-887730
	I1026 09:03:21.441097  402638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/multinode-887730/id_rsa Username:docker}
	I1026 09:03:21.544021  402638 ssh_runner.go:195] Run: systemctl --version
	I1026 09:03:21.550681  402638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:03:21.563885  402638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:03:21.630917  402638 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 09:03:21.620323969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:03:21.631502  402638 kubeconfig.go:125] found "multinode-887730" server: "https://192.168.67.2:8443"
	I1026 09:03:21.631542  402638 api_server.go:166] Checking apiserver status ...
	I1026 09:03:21.631591  402638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 09:03:21.643313  402638 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	I1026 09:03:21.651971  402638 api_server.go:182] apiserver freezer: "6:freezer:/docker/18933ed32613bacaf8ed9c48137b267e8cb6cfbfb1d0a1200890d22277e6bb48/crio/crio-e82d88ea6cc8be474a375f5068db0bc85973d323fefd38ca233f49f37433b900"
	I1026 09:03:21.652036  402638 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/18933ed32613bacaf8ed9c48137b267e8cb6cfbfb1d0a1200890d22277e6bb48/crio/crio-e82d88ea6cc8be474a375f5068db0bc85973d323fefd38ca233f49f37433b900/freezer.state
	I1026 09:03:21.659630  402638 api_server.go:204] freezer state: "THAWED"
	I1026 09:03:21.659668  402638 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1026 09:03:21.668051  402638 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1026 09:03:21.668079  402638 status.go:463] multinode-887730 apiserver status = Running (err=<nil>)
	I1026 09:03:21.668091  402638 status.go:176] multinode-887730 status: &{Name:multinode-887730 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 09:03:21.668108  402638 status.go:174] checking status of multinode-887730-m02 ...
	I1026 09:03:21.668408  402638 cli_runner.go:164] Run: docker container inspect multinode-887730-m02 --format={{.State.Status}}
	I1026 09:03:21.689803  402638 status.go:371] multinode-887730-m02 host status = "Running" (err=<nil>)
	I1026 09:03:21.689830  402638 host.go:66] Checking if "multinode-887730-m02" exists ...
	I1026 09:03:21.690145  402638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-887730-m02
	I1026 09:03:21.707206  402638 host.go:66] Checking if "multinode-887730-m02" exists ...
	I1026 09:03:21.707531  402638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 09:03:21.707576  402638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-887730-m02
	I1026 09:03:21.724886  402638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21772-293616/.minikube/machines/multinode-887730-m02/id_rsa Username:docker}
	I1026 09:03:21.828469  402638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 09:03:21.842524  402638 status.go:176] multinode-887730-m02 status: &{Name:multinode-887730-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 09:03:21.842564  402638 status.go:174] checking status of multinode-887730-m03 ...
	I1026 09:03:21.842957  402638 cli_runner.go:164] Run: docker container inspect multinode-887730-m03 --format={{.State.Status}}
	I1026 09:03:21.860695  402638 status.go:371] multinode-887730-m03 host status = "Stopped" (err=<nil>)
	I1026 09:03:21.860741  402638 status.go:384] host is not running, skipping remaining checks
	I1026 09:03:21.860750  402638 status.go:176] multinode-887730-m03 status: &{Name:multinode-887730-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
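
Note: stopping a single node leaves the cluster reachable but makes status exit non-zero; in the run above exit status 7 reflects that multinode-887730-m03's host and kubelet are Stopped while the other machines keep Running. Sketch of the check (exit code as observed in this run):

	minikube -p multinode-887730 node stop m03
	minikube -p multinode-887730 status; echo "exit=$?"	# printed exit=7 here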

TestMultiNode/serial/StartAfterStop (7.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-887730 node start m03 -v=5 --alsologtostderr: (7.14173643s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.96s)

TestMultiNode/serial/RestartKeepsNodes (77.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-887730
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-887730
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-887730: (25.190617589s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887730 --wait=true -v=5 --alsologtostderr
E1026 09:03:59.193495  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887730 --wait=true -v=5 --alsologtostderr: (51.899509829s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-887730
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.23s)

TestMultiNode/serial/DeleteNode (5.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-887730 node delete m03: (5.019663059s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.73s)

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-887730 stop: (23.810860251s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887730 status: exit status 7 (104.932024ms)
-- stdout --
	multinode-887730
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-887730-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr: exit status 7 (99.318237ms)
-- stdout --
	multinode-887730
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-887730-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 09:05:16.758611  410367 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:05:16.758880  410367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:05:16.758919  410367 out.go:374] Setting ErrFile to fd 2...
	I1026 09:05:16.758940  410367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:05:16.759253  410367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:05:16.759532  410367 out.go:368] Setting JSON to false
	I1026 09:05:16.759615  410367 mustload.go:65] Loading cluster: multinode-887730
	I1026 09:05:16.759694  410367 notify.go:220] Checking for updates...
	I1026 09:05:16.760111  410367 config.go:182] Loaded profile config "multinode-887730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:05:16.760151  410367 status.go:174] checking status of multinode-887730 ...
	I1026 09:05:16.761099  410367 cli_runner.go:164] Run: docker container inspect multinode-887730 --format={{.State.Status}}
	I1026 09:05:16.779517  410367 status.go:371] multinode-887730 host status = "Stopped" (err=<nil>)
	I1026 09:05:16.779539  410367 status.go:384] host is not running, skipping remaining checks
	I1026 09:05:16.779546  410367 status.go:176] multinode-887730 status: &{Name:multinode-887730 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 09:05:16.779587  410367 status.go:174] checking status of multinode-887730-m02 ...
	I1026 09:05:16.779897  410367 cli_runner.go:164] Run: docker container inspect multinode-887730-m02 --format={{.State.Status}}
	I1026 09:05:16.803270  410367 status.go:371] multinode-887730-m02 host status = "Stopped" (err=<nil>)
	I1026 09:05:16.803336  410367 status.go:384] host is not running, skipping remaining checks
	I1026 09:05:16.803357  410367 status.go:176] multinode-887730-m02 status: &{Name:multinode-887730-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (56.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887730 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887730 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (56.192325577s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887730 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.88s)
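(The readiness probe on the final line above is reusable on its own; the same go-template as the test, assuming kubectl points at the restarted cluster:)

	# Print the Ready condition status (True/False) for every node
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'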

TestMultiNode/serial/ValidateNameConflict (37.78s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-887730
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887730-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-887730-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.587545ms)

-- stdout --
	* [multinode-887730-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-887730-m02' is duplicated with machine name 'multinode-887730-m02' in profile 'multinode-887730'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887730-m03 --driver=docker  --container-runtime=crio
E1026 09:06:13.919147  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887730-m03 --driver=docker  --container-runtime=crio: (35.218359141s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-887730
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-887730: exit status 80 (367.662462ms)

-- stdout --
	* Adding node m03 to cluster multinode-887730 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-887730-m03 already exists in multinode-887730-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-887730-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-887730-m03: (2.045741569s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.78s)
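(The failure modes above condense to a short sketch using only the commands from this run: a new profile whose name collides with a machine name inside an existing profile is rejected up front with MK_USAGE/exit 14, and `node add` refuses a node name already owned by a standalone profile with GUEST_NODE_ADD/exit 80.)

	# Rejected: multinode-887730-m02 already names a machine inside profile multinode-887730 (exit 14)
	out/minikube-linux-arm64 start -p multinode-887730-m02 --driver=docker --container-runtime=crio
	# Allowed: -m03 is free, so this creates a standalone profile
	out/minikube-linux-arm64 start -p multinode-887730-m03 --driver=docker --container-runtime=crio
	# Rejected: the next node would be named multinode-887730-m03, which that profile now owns (exit 80)
	out/minikube-linux-arm64 node add -p multinode-887730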

TestPreload (130.47s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-887444 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-887444 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.980290912s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-887444 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-887444 image pull gcr.io/k8s-minikube/busybox: (2.20558433s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-887444
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-887444: (5.924725204s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-887444 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1026 09:08:59.192974  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-887444 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.662144515s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-887444 image list
helpers_test.go:175: Cleaning up "test-preload-887444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-887444
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-887444: (2.449513703s)
--- PASS: TestPreload (130.47s)
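(The flow above verifies that a pulled image survives a stop/start cycle even with the preload tarball disabled; a condensed sketch of the same steps:)

	# Start without the preloaded image tarball, on a pinned Kubernetes version
	out/minikube-linux-arm64 start -p test-preload-887444 --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-arm64 -p test-preload-887444 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p test-preload-887444
	# Restart and confirm the pulled image is still in the runtime's store
	out/minikube-linux-arm64 start -p test-preload-887444 --memory=3072 --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p test-preload-887444 image list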

TestScheduledStopUnix (109.87s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-820137 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-820137 --memory=3072 --driver=docker  --container-runtime=crio: (32.47998731s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-820137 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-820137 -n scheduled-stop-820137
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-820137 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 09:09:39.241846  295475 retry.go:31] will retry after 85.038µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.242041  295475 retry.go:31] will retry after 110.466µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.243171  295475 retry.go:31] will retry after 325.294µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.244350  295475 retry.go:31] will retry after 408.72µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.245472  295475 retry.go:31] will retry after 412.779µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.246593  295475 retry.go:31] will retry after 674.884µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.247748  295475 retry.go:31] will retry after 1.184007ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.249957  295475 retry.go:31] will retry after 880.267µs: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.251086  295475 retry.go:31] will retry after 1.845537ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.253291  295475 retry.go:31] will retry after 4.781982ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.258517  295475 retry.go:31] will retry after 4.730384ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.263748  295475 retry.go:31] will retry after 11.291163ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.275992  295475 retry.go:31] will retry after 12.295024ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.290462  295475 retry.go:31] will retry after 24.326283ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.325464  295475 retry.go:31] will retry after 25.489358ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
I1026 09:09:39.351760  295475 retry.go:31] will retry after 52.15797ms: open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/scheduled-stop-820137/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-820137 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-820137 -n scheduled-stop-820137
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-820137
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-820137 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-820137
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-820137: exit status 7 (76.634675ms)

-- stdout --
	scheduled-stop-820137
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-820137 -n scheduled-stop-820137
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-820137 -n scheduled-stop-820137: exit status 7 (68.090602ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-820137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-820137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-820137: (5.722730202s)
--- PASS: TestScheduledStopUnix (109.87s)
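(The scheduled-stop lifecycle exercised above, condensed; the `sleep` is shell glue, not part of the test:)

	# Schedule a stop, inspect the countdown, then cancel it
	out/minikube-linux-arm64 stop -p scheduled-stop-820137 --schedule 5m
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-820137
	out/minikube-linux-arm64 stop -p scheduled-stop-820137 --cancel-scheduled
	# Re-schedule with a short window; once it fires, status reports Stopped and exits 7
	out/minikube-linux-arm64 stop -p scheduled-stop-820137 --schedule 15s
	sleep 20; out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-820137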

TestInsufficientStorage (13.05s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-615691 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1026 09:10:56.992538  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-615691 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.443275235s)

-- stdout --
	{"specversion":"1.0","id":"3596bd70-472c-439d-a884-6c18de933ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-615691] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5eb32d7f-5e75-466e-86df-e6d419942809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"c866dd07-a364-4301-a83a-903074960654","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"84e84d58-1d2b-445e-a049-0a32f28002d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig"}}
	{"specversion":"1.0","id":"2b09f2bc-c6d4-4680-b1e8-f979ebd8b3ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube"}}
	{"specversion":"1.0","id":"c8ecffb1-5f46-4aa4-8b61-ec3b65d599f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b9bcb985-33df-491a-895e-1b9f5643c20b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"caa5100c-e59d-4218-abef-4f2db78d4cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"45c4e5b0-b709-4aca-8b12-5893a91bff39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4a103edc-b82b-49b5-8866-f41c5440a253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"85451caa-8e1a-414a-9ec7-15d596f4385a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"354a95f1-ff25-4aef-9373-1f0a9bb1d912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-615691\" primary control-plane node in \"insufficient-storage-615691\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"648dd437-5697-444c-ada5-bf51b81a5833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5a4cf51-215b-485f-81f6-144c4d5ec73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0db44207-875a-49fe-bd94-b79b2d896662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-615691 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-615691 --output=json --layout=cluster: exit status 7 (293.76645ms)

-- stdout --
	{"Name":"insufficient-storage-615691","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-615691","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1026 09:11:06.829853  426570 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-615691" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-615691 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-615691 --output=json --layout=cluster: exit status 7 (297.914384ms)

-- stdout --
	{"Name":"insufficient-storage-615691","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-615691","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1026 09:11:07.130029  426637 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-615691" does not appear in /home/jenkins/minikube-integration/21772-293616/kubeconfig
	E1026 09:11:07.139994  426637 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/insufficient-storage-615691/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-615691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-615691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-615691: (2.018596989s)
--- PASS: TestInsufficientStorage (13.05s)
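(The low-disk condition is simulated via the test-only environment variables visible in the JSON output above; a sketch, assuming they are honored when exported the same way the harness sets them:)

	# Start aborts with RSRC_DOCKER_STORAGE / exit 26 when apparent free space is too low
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 out/minikube-linux-arm64 start -p insufficient-storage-615691 --memory=3072 --output=json --driver=docker --container-runtime=crio
	# Cluster-layout status then reports StatusCode 507 (InsufficientStorage) and exits 7
	out/minikube-linux-arm64 status -p insufficient-storage-615691 --output=json --layout=cluster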

TestRunningBinaryUpgrade (50.84s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3907571247 start -p running-upgrade-931705 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3907571247 start -p running-upgrade-931705 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.257675525s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-931705 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-931705 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.898808117s)
helpers_test.go:175: Cleaning up "running-upgrade-931705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-931705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-931705: (1.995083908s)
--- PASS: TestRunningBinaryUpgrade (50.84s)

TestMissingContainerUpgrade (122.53s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.741015717 start -p missing-upgrade-019301 --memory=3072 --driver=docker  --container-runtime=crio
E1026 09:11:13.918500  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.741015717 start -p missing-upgrade-019301 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.192847245s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-019301
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-019301
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-019301 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-019301 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.769758427s)
helpers_test.go:175: Cleaning up "missing-upgrade-019301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-019301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-019301: (2.220779126s)
--- PASS: TestMissingContainerUpgrade (122.53s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (113.303224ms)

-- stdout --
	* [NoKubernetes-948910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
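(--no-kubernetes and --kubernetes-version are mutually exclusive, hence the MK_USAGE/exit 14 above; if a version is pinned in the global config, clear it first as the error text suggests:)

	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --driver=docker --container-runtime=crio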

TestNoKubernetes/serial/StartWithK8s (42.22s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-948910 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-948910 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.645161037s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-948910 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.22s)

TestNoKubernetes/serial/StartWithStopK8s (8.38s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.713888587s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-948910 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-948910 status -o json: exit status 2 (409.898926ms)

-- stdout --
	{"Name":"NoKubernetes-948910","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-948910
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-948910: (2.252820594s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.38s)
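(Exit status 2 here signals a degraded profile, but stdout is still one well-formed JSON object; a sketch of machine-parsing it, assuming jq is available on the host:)

	# Prints "Running" then "Stopped" for the state above
	out/minikube-linux-arm64 -p NoKubernetes-948910 status -o json | jq -r '.Host, .Kubelet'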

TestNoKubernetes/serial/Start (10.71s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1026 09:12:02.258372  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-948910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.71261304s)
--- PASS: TestNoKubernetes/serial/Start (10.71s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-948910 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-948910 "sudo systemctl is-active --quiet service kubelet": exit status 1 (399.651481ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
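(The check above relies on systemctl's exit code: `is-active --quiet` exits 3 for an inactive unit, which the ssh wrapper surfaces as the non-zero exit the test expects:)

	out/minikube-linux-arm64 ssh -p NoKubernetes-948910 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"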

TestNoKubernetes/serial/ProfileList (2.87s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (2.209365567s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.87s)

TestNoKubernetes/serial/Stop (1.36s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-948910
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-948910: (1.355228768s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (7.59s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-948910 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-948910 --driver=docker  --container-runtime=crio: (7.592153147s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-948910 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-948910 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.39888ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.54s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.54s)

TestStoppedBinaryUpgrade/Upgrade (60.16s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.981175541 start -p stopped-upgrade-017998 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.981175541 start -p stopped-upgrade-017998 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.146164854s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.981175541 -p stopped-upgrade-017998 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.981175541 -p stopped-upgrade-017998 stop: (1.378437701s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-017998 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1026 09:13:59.193052  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-017998 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.63977243s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.16s)
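(The upgrade path above, condensed: provision with the old release binary, stop it, then start the same profile with the binary under test. Note the old release still takes --vm-driver where the new one takes --driver:)

	/tmp/minikube-v1.32.0.981175541 start -p stopped-upgrade-017998 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0.981175541 -p stopped-upgrade-017998 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-017998 --memory=3072 --driver=docker --container-runtime=crio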

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-017998
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-017998: (1.176642605s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestPause/serial/Start (80.15s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-827956 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1026 09:16:13.918920  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-827956 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.147284987s)
--- PASS: TestPause/serial/Start (80.15s)

TestPause/serial/SecondStartNoReconfiguration (23.95s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-827956 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-827956 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.929732274s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (23.95s)

TestNetworkPlugins/group/false (3.84s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-796399 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-796399 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.798153ms)

-- stdout --
	* [false-796399] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1026 09:17:46.076968  463291 out.go:360] Setting OutFile to fd 1 ...
	I1026 09:17:46.077087  463291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:17:46.077098  463291 out.go:374] Setting ErrFile to fd 2...
	I1026 09:17:46.077104  463291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 09:17:46.077417  463291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-293616/.minikube/bin
	I1026 09:17:46.077863  463291 out.go:368] Setting JSON to false
	I1026 09:17:46.078756  463291 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10816,"bootTime":1761459450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 09:17:46.078824  463291 start.go:141] virtualization:  
	I1026 09:17:46.082303  463291 out.go:179] * [false-796399] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 09:17:46.086117  463291 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 09:17:46.086209  463291 notify.go:220] Checking for updates...
	I1026 09:17:46.092447  463291 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 09:17:46.095562  463291 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-293616/kubeconfig
	I1026 09:17:46.098508  463291 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-293616/.minikube
	I1026 09:17:46.101341  463291 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 09:17:46.104289  463291 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 09:17:46.107828  463291 config.go:182] Loaded profile config "kubernetes-upgrade-275732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 09:17:46.107943  463291 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 09:17:46.131189  463291 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 09:17:46.131316  463291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 09:17:46.189497  463291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 09:17:46.1802753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 09:17:46.189603  463291 docker.go:318] overlay module found
	I1026 09:17:46.192743  463291 out.go:179] * Using the docker driver based on user configuration
	I1026 09:17:46.195647  463291 start.go:305] selected driver: docker
	I1026 09:17:46.195668  463291 start.go:925] validating driver "docker" against <nil>
	I1026 09:17:46.195683  463291 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 09:17:46.199288  463291 out.go:203] 
	W1026 09:17:46.202181  463291 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 09:17:46.205046  463291 out.go:203] 

** /stderr **
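(The rejection is by design: cri-o has no built-in networking, so --cni=false fails validation with MK_USAGE/exit 14 before any node is created, which is also why every debug probe below finds neither a profile nor a kubectl context. Dropping the flag, as a sketch, leaves CNI selection to minikube:)

	# Rejected: exit 14, 'The "crio" container runtime requires CNI'
	out/minikube-linux-arm64 start -p false-796399 --memory=3072 --cni=false --driver=docker --container-runtime=crio
	# Omitting --cni lets minikube pick a CNI that satisfies cri-o
	out/minikube-linux-arm64 start -p false-796399 --memory=3072 --driver=docker --container-runtime=crio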
net_test.go:88: 
----------------------- debugLogs start: false-796399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-796399

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-796399

>>> host: /etc/nsswitch.conf:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/hosts:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/resolv.conf:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-796399

>>> host: crictl pods:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: crictl containers:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> k8s: describe netcat deployment:
error: context "false-796399" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-796399" does not exist

>>> k8s: netcat logs:
error: context "false-796399" does not exist

>>> k8s: describe coredns deployment:
error: context "false-796399" does not exist

>>> k8s: describe coredns pods:
error: context "false-796399" does not exist

>>> k8s: coredns logs:
error: context "false-796399" does not exist

>>> k8s: describe api server pod(s):
error: context "false-796399" does not exist

>>> k8s: api server logs:
error: context "false-796399" does not exist

>>> host: /etc/cni:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: ip a s:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: ip r s:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: iptables-save:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: iptables table nat:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> k8s: describe kube-proxy daemon set:
error: context "false-796399" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-796399" does not exist

>>> k8s: kube-proxy logs:
error: context "false-796399" does not exist

>>> host: kubelet daemon status:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: kubelet daemon config:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> k8s: kubelet logs:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 09:15:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-275732
contexts:
- context:
    cluster: kubernetes-upgrade-275732
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 09:15:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-275732
  name: kubernetes-upgrade-275732
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-275732
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.crt
    client-key: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.key
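
Note on the kubeconfig above: current-context is empty and the only surviving entry is kubernetes-upgrade-275732, which is why every lookup against the deleted false-796399 profile in this capture fails. A minimal shell sketch to confirm that against such a kubeconfig (context names taken from the dump; illustrative only, not part of the test run):

# List the contexts actually present in the captured kubeconfig.
kubectl config get-contexts
# A query pinned to the surviving context can succeed...
kubectl --context kubernetes-upgrade-275732 get nodes
# ...while the deleted profile's context reproduces the errors in this capture:
kubectl --context false-796399 get nodes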

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-796399

>>> host: docker daemon status:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: docker daemon config:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/docker/daemon.json:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: docker system info:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: cri-docker daemon status:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: cri-docker daemon config:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: cri-dockerd version:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: containerd daemon status:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: containerd daemon config:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/containerd/config.toml:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: containerd config dump:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: crio daemon status:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: crio daemon config:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: /etc/crio:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

>>> host: crio config:
* Profile "false-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796399"

----------------------- debugLogs end: false-796399 [took: 3.511589805s] --------------------------------
helpers_test.go:175: Cleaning up "false-796399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-796399
--- PASS: TestNetworkPlugins/group/false (3.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (73.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m13.618545643s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (73.62s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.451691198s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.45s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-167519 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8840eaba-c0c0-4054-98c5-7062e5c2f5e4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8840eaba-c0c0-4054-98c5-7062e5c2f5e4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003362278s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-167519 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)
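
Note: testdata/busybox.yaml is not reproduced in this report. A minimal sketch of a pod that would satisfy the integration-test=busybox selector the test waits on could look like the following (the image tag is one the image-list steps below actually report; the command is an assumption, and the real fixture may differ):

# Hypothetical stand-in for testdata/busybox.yaml.
kubectl --context old-k8s-version-167519 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox   # the label the test's pod watch matches on
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
# Follow-up check from the log: read the open-file limit inside the pod.
kubectl --context old-k8s-version-167519 exec busybox -- /bin/sh -c "ulimit -n"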

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-167519 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-167519 --alsologtostderr -v=3: (12.150816968s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519: exit status 7 (82.002561ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-167519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (47.45s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-167519 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.968128339s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-167519 -n old-k8s-version-167519
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.45s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [10b1f4ca-a577-4294-8ca9-e260f0eb3247] Pending
helpers_test.go:352: "busybox" [10b1f4ca-a577-4294-8ca9-e260f0eb3247] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [10b1f4ca-a577-4294-8ca9-e260f0eb3247] Running
E1026 09:23:59.192937  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004331874s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-289159 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-289159 --alsologtostderr -v=3: (12.477329597s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159: exit status 7 (100.519201ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-289159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-289159 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.407632174s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-289159 -n default-k8s-diff-port-289159
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2z5gd" [fab603ff-4a4d-4c2c-9dc6-16afee3b82cc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003136216s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2z5gd" [fab603ff-4a4d-4c2c-9dc6-16afee3b82cc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004008528s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-167519 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-167519 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
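
Note: the VerifyKubernetesImages steps list the cluster's images and report anything outside the set minikube itself ships as "non-minikube". A rough shell approximation of that check (the jq field names and the registry filter are assumptions for illustration, not the test's actual allow-list):

# List images as JSON and flag tags not pulled from registry.k8s.io
# (an approximation of the expected-image set; the real test knows the
# exact per-version list, so its output can differ from this sketch).
out/minikube-linux-arm64 -p old-k8s-version-167519 image list --format=json \
  | jq -r '.[].repoTags[]?' \
  | grep -v '^registry\.k8s\.io/'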

TestStartStop/group/embed-certs/serial/FirstStart (86.38s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.382461908s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.38s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jkxkb" [f51b3f5f-7944-4fbc-8663-fc9647be0c2f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00517416s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jkxkb" [f51b3f5f-7944-4fbc-8663-fc9647be0c2f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005748741s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-289159 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-289159 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/FirstStart (71.12s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 09:26:13.918633  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.117060861s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.12s)

TestStartStop/group/embed-certs/serial/DeployApp (9.42s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-204381 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7bfa86d2-0d6d-4a37-b944-03fd17347db8] Pending
helpers_test.go:352: "busybox" [7bfa86d2-0d6d-4a37-b944-03fd17347db8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7bfa86d2-0d6d-4a37-b944-03fd17347db8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004109051s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-204381 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

TestStartStop/group/embed-certs/serial/Stop (12.16s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-204381 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-204381 --alsologtostderr -v=3: (12.16055423s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-491604 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9e3cede7-8f2e-49cf-bdc2-b16fe5818763] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9e3cede7-8f2e-49cf-bdc2-b16fe5818763] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004048051s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-491604 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381: exit status 7 (78.055917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-204381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (48.66s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-204381 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.200369004s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204381 -n embed-certs-204381
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.66s)

TestStartStop/group/no-preload/serial/Stop (12.41s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-491604 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-491604 --alsologtostderr -v=3: (12.410567585s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.41s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604: exit status 7 (89.885528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-491604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (54.32s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 09:27:36.994785  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/addons-178002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-491604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.821001273s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491604 -n no-preload-491604
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ff88" [1d408d28-04be-46eb-9ff5-f6ecf8801b89] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003534517s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ff88" [1d408d28-04be-46eb-9ff5-f6ecf8801b89] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00303076s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-204381 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-204381 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/FirstStart (45.15s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.148364911s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7ljxx" [f81533b4-a61b-4898-9998-d631198e8d6b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003377045s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7ljxx" [f81533b4-a61b-4898-9998-d631198e8d6b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004164699s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-491604 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491604 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/auto/Start (85.02s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1026 09:28:28.755335  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:28.761716  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:28.773102  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:28.794447  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:28.836104  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:28.919005  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:29.080824  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:29.402101  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:30.043880  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:31.325195  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:33.886872  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:39.009897  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:42.261081  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.015352833s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.02s)
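
Note on the E1026 "Loading client cert failed" lines interleaved above: they appear to be emitted by a background certificate-reload goroutine in the test process after the old-k8s-version-167519 profile (and its client.crt) was deleted, so they are log noise rather than failures of the test being run. If stale references lingered in the shared kubeconfig, a sketch of pruning them might look like this (whether these exact entries exist depends on the kubeconfig's state at this point):

# Show whether the kubeconfig still references the deleted profile.
kubectl config get-contexts | grep old-k8s-version-167519 || true
# Drop the stale context and user entries.
kubectl config delete-context old-k8s-version-167519
kubectl config unset users.old-k8s-version-167519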

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.49s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-596581 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-596581 --alsologtostderr -v=3: (1.486119692s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.49s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581: exit status 7 (93.249896ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-596581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (20.77s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 09:28:55.159905  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.166235  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.177568  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.198880  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.240253  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.321603  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.483055  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:55.804625  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:56.446186  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:57.728180  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:28:59.192875  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/functional-622437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:29:00.289836  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:29:05.411166  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:29:09.751799  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-596581 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (20.330337438s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-596581 -n newest-cni-596581
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.77s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-596581 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/kindnet/Start (83.46s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1026 09:29:36.134652  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:29:50.713414  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.462548033s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.46s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-796399 "pgrep -a kubelet"
I1026 09:29:54.044805  295475 config.go:182] Loaded profile config "auto-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-796399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f44kh" [2a17fec1-ee6b-4766-947c-0b2d78504eb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f44kh" [2a17fec1-ee6b-4766-947c-0b2d78504eb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003757705s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.38s)
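
NetCatPod applies the suite's netcat deployment and waits for it to become Ready; the pod predictably sits in Pending while the dnsutils image pulls, which is what the two helpers_test.go lines above record. A hand-run equivalent, with kubectl wait as a stand-in for the test's 15m poll (the testdata path is relative to the test package):

kubectl --context auto-796399 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-796399 wait --for=condition=Ready pod -l app=netcat --timeout=15m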

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
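
DNS, Localhost and HairPin form the per-plugin connectivity trio: resolve an in-cluster service name, dial loopback inside the pod, then dial the pod back through its own netcat service (the hairpin path). In the nc invocations, -z connects without sending data, -w 5 bounds the timeout and -i 5 paces retries. The commands are verbatim from the logs and runnable against a live profile:

# DNS resolution through the cluster resolver:
kubectl --context auto-796399 exec deployment/netcat -- nslookup kubernetes.default
# Loopback inside the pod's own network namespace:
kubectl --context auto-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Hairpin: the pod reaching itself via its service name:
kubectl --context auto-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"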

TestNetworkPlugins/group/calico/Start (68.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.014261626s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2vdv9" [3fe3edf9-7a36-485d-b008-ee293f3a45aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004047253s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-796399 "pgrep -a kubelet"
I1026 09:30:51.540675  295475 config.go:182] Loaded profile config "kindnet-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-796399 replace --force -f testdata/netcat-deployment.yaml
I1026 09:30:51.924192  295475 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kpldj" [08f1d59e-d129-4078-93ce-4fa5e012fe94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kpldj" [08f1d59e-d129-4078-93ce-4fa5e012fe94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003968039s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.41s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (66.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.426728538s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.43s)
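
custom-flannel exercises the code path where --cni is given a path to an arbitrary CNI manifest rather than a built-in keyword; everything after startup is identical to the other plugins. Flags verbatim from the test:

out/minikube-linux-arm64 start -p custom-flannel-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio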

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5hvxq" [38a46cb2-04b8-4898-8c2f-86611480400c] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-5hvxq" [38a46cb2-04b8-4898-8c2f-86611480400c] Running
E1026 09:31:39.017477  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003950735s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
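
Note that the controller-pod selector differs per plugin: calico is matched on k8s-app=calico-node in kube-system, and the log above shows the pod briefly Running with an unready calico-node container before passing within the 6s window. A hand-rolled equivalent of the poll, again using kubectl wait in place of the harness's loop:

kubectl --context calico-796399 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m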

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-796399 "pgrep -a kubelet"
I1026 09:31:40.064529  295475 config.go:182] Loaded profile config "calico-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-796399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-78l9f" [01484000-2d2a-4747-a41f-9ec071db635e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-78l9f" [01484000-2d2a-4747-a41f-9ec071db635e] Running
E1026 09:31:47.084831  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.091185  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.102599  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.124005  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.165378  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.246794  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.408240  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:47.730220  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:48.371664  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 09:31:49.653185  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004402968s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (76.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1026 09:32:28.060235  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m16.618290657s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.62s)
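
enable-default-cni takes yet another configuration path: no --cni keyword or manifest, just --enable-default-cni=true, which has minikube apply its default CNI configuration. Verbatim from the test:

out/minikube-linux-arm64 start -p enable-default-cni-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker --container-runtime=crio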

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-796399 "pgrep -a kubelet"
I1026 09:32:35.515215  295475 config.go:182] Loaded profile config "custom-flannel-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-796399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-89hfs" [e9ad3007-b36e-40e4-8a17-56e45d145fd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-89hfs" [e9ad3007-b36e-40e4-8a17-56e45d145fd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00303527s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (63.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1026 09:33:28.755804  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/old-k8s-version-167519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.590591064s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.59s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-796399 "pgrep -a kubelet"
I1026 09:33:36.517518  295475 config.go:182] Loaded profile config "enable-default-cni-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-796399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cgvmb" [fde10ada-42f8-441a-b146-a6728dc97321] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cgvmb" [fde10ada-42f8-441a-b146-a6728dc97321] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003172065s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (71.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-796399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m11.98126867s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.98s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-jsfr9" [ce2ca6bc-b345-4d2c-8b8c-5228a5d5fb55] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003829149s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
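
flannel is the one plugin in this run whose DaemonSet lives outside kube-system: the test polls app=flannel in the kube-flannel namespace. An equivalent manual check, with kubectl wait standing in for the poll:

kubectl --context flannel-796399 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m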

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-796399 "pgrep -a kubelet"
I1026 09:34:22.636859  295475 config.go:182] Loaded profile config "flannel-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-796399 replace --force -f testdata/netcat-deployment.yaml
E1026 09:34:22.858884  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/default-k8s-diff-port-289159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dr84c" [9dbc3291-9682-4587-b970-2ae93d645e2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dr84c" [9dbc3291-9682-4587-b970-2ae93d645e2d] Running
E1026 09:34:30.944304  295475 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/no-preload-491604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005284593s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-796399 "pgrep -a kubelet"
I1026 09:35:23.140748  295475 config.go:182] Loaded profile config "bridge-796399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-796399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lwqvc" [7ae3391a-55c0-4b87-afca-f0f7d8accea5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lwqvc" [7ae3391a-55c0-4b87-afca-f0f7d8accea5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003732739s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-796399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-796399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (30/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-436037 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-436037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-436037
--- SKIP: TestDownloadOnlyKic (0.44s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-434228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-434228
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (3.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-796399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-796399
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-796399
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: /etc/hosts:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: /etc/resolv.conf:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-796399
>>> host: crictl pods:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: crictl containers:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> k8s: describe netcat deployment:
error: context "kubenet-796399" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-796399" does not exist
>>> k8s: netcat logs:
error: context "kubenet-796399" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-796399" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-796399" does not exist
>>> k8s: coredns logs:
error: context "kubenet-796399" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-796399" does not exist
>>> k8s: api server logs:
error: context "kubenet-796399" does not exist
>>> host: /etc/cni:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: ip a s:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: ip r s:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: iptables-save:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: iptables table nat:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-796399" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-796399" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-796399" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: kubelet daemon config:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> k8s: kubelet logs:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 09:15:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-275732
contexts:
- context:
    cluster: kubernetes-upgrade-275732
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 09:15:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-275732
  name: kubernetes-upgrade-275732
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-275732
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.crt
    client-key: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.key

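Note: the kubeconfig above has current-context: "" and contains only the
kubernetes-upgrade-275732 entries, which is why every kubectl command against
kubenet-796399 in this section fails with "context was not found". A minimal
sketch (commands are standard kubectl/minikube CLI, not captured in this log)
of confirming this from the host:

	kubectl config get-contexts     # lists only kubernetes-upgrade-275732
	kubectl config current-context  # fails: current-context is not set
	minikube profile list           # shows no kubenet-796399 profile
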
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-796399

>>> host: docker daemon status:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: docker daemon config:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: docker system info:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: cri-docker daemon status:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: cri-docker daemon config:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: cri-dockerd version:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: containerd daemon status:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: containerd daemon config:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: containerd config dump:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: crio daemon status:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: crio daemon config:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: /etc/crio:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

>>> host: crio config:
* Profile "kubenet-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796399"

----------------------- debugLogs end: kubenet-796399 [took: 3.835884604s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-796399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-796399
--- SKIP: TestNetworkPlugins/group/kubenet (3.99s)

TestNetworkPlugins/group/cilium (4.17s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-796399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-796399

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-796399

>>> host: /etc/nsswitch.conf:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/hosts:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/resolv.conf:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-796399

>>> host: crictl pods:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: crictl containers:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> k8s: describe netcat deployment:
error: context "cilium-796399" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-796399" does not exist

>>> k8s: netcat logs:
error: context "cilium-796399" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-796399" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-796399" does not exist

>>> k8s: coredns logs:
error: context "cilium-796399" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-796399" does not exist

>>> k8s: api server logs:
error: context "cilium-796399" does not exist

>>> host: /etc/cni:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: ip a s:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: ip r s:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: iptables-save:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: iptables table nat:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-796399

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-796399

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-796399" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-796399" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-796399

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-796399

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-796399" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-796399" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-796399" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-796399" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-796399" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: kubelet daemon config:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> k8s: kubelet logs:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-293616/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 09:15:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-275732
contexts:
- context:
    cluster: kubernetes-upgrade-275732
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 09:15:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-275732
  name: kubernetes-upgrade-275732
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-275732
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.crt
    client-key: /home/jenkins/minikube-integration/21772-293616/.minikube/profiles/kubernetes-upgrade-275732/client.key

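Note: the same kubeconfig (empty current-context, no cilium-796399 entry)
explains the cilium-796399 failures above. A minimal sketch (commands not
captured in this log) of selecting the one surviving context so that kubectl
calls resolve:

	kubectl config use-context kubernetes-upgrade-275732
	kubectl --context kubernetes-upgrade-275732 get nodes
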
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-796399

>>> host: docker daemon status:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: docker daemon config:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: docker system info:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: cri-docker daemon status:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: cri-docker daemon config:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: cri-dockerd version:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: containerd daemon status:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: containerd daemon config:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: containerd config dump:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: crio daemon status:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: crio daemon config:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: /etc/crio:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

>>> host: crio config:
* Profile "cilium-796399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796399"

----------------------- debugLogs end: cilium-796399 [took: 4.010049723s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-796399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-796399
--- SKIP: TestNetworkPlugins/group/cilium (4.17s)
